:
I call the meeting to order.
Welcome to meeting number 19 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
[English]
Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.
Today’s meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely by using the Zoom application.
I have a couple of comments for the benefit of witnesses. We have witnesses in the room and witnesses participating by Zoom. Please wait until I recognize your name before speaking. If you are participating by Zoom, click on the microphone icon to activate your mike, and please mute yourself when not speaking. In the room, your mike should be controlled—you shouldn't have to hit the button—but just be aware and make sure that your microphone is lit up before you speak. I'll remind you that comments should be addressed through the chair.
Now I would like to welcome our witnesses.
We have, from Microsoft, Owen Larter, director responsible for artificial intelligence public policy; and from the National Council of Canadian Muslims, we have Mustafa Farooq, chief executive officer; and Rizwan Mohammad, advocacy officer.
We will start with Mr. Larter.
You have up to five minutes for your opening statement.
[Translation]
Good afternoon, everyone.
[English]
Thank you very much, Mr. Chair and vice-chairs, for the opportunity to contribute today.
My name is Owen Larter. I'm in the public policy team in the Office of Responsible AI at Microsoft.
There are really three points that I want to get across in my comments today.
First, facial recognition is a new and powerful technology that is already being used and for which we now need regulation.
Second, there is a particular urgency around regulating police use of facial recognition, given the consequential nature of police decisions.
Third, there is a real opportunity for Canada to lead the way globally in shaping facial recognition regulation that protects human rights and advances transparency and accountability.
I want to start by applauding the work of the committee on this really important topic. We at Microsoft are suppliers of facial recognition. We do believe that it can bring real benefits to society. This includes helping secure devices and assisting people who are blind or with low vision to access more immersive social experiences. In the public safety context, it can be used to help find victims of trafficking and as part of the criminal investigation process.
However, we are also clear-eyed about the potential risks of this technology. That includes the risk of bias and unfair performance, including across different demographic groups; the potential for new intrusions into people's privacy; and possible threats to democratic freedoms and human rights.
In response to this, in recent years we've developed a number of internal safeguards at Microsoft. These include our facial recognition principles and the creation of our Face API transparency note. This transparency note communicates, in language aimed at non-technical audiences, how our facial recognition works, what its capabilities and limitations are and the factors that will affect performance, all with a view to helping customers understand how to use it responsibly.
Facial recognition work builds on Microsoft's broader responsible AI program. This is a program that ensures colleagues are developing and deploying AI in a way that adheres to our principles. The program includes our cross-company AI governance team and our responsible AI standard, which is a series of requirements that colleagues developing and deploying AI must adhere to. It also includes our process for reviewing sensitive AI uses.
In addition to these internal safeguards, we also believe that there is a need for regulation. This need is particularly acute in the law enforcement context, as I mentioned. We really do feel that the importance of this committee's work cannot be overstated. We commend the way in which it is bringing together stakeholders from across society, including government, civil society, industry and academia to discuss what a regulatory framework should look like.
We note that while there has been positive progress in places like Washington state in the U.S., with important ongoing conversations in the EU and elsewhere, we do believe that Canada has an opportunity to play a leading role in shaping regulation in this space.
We think that type of regulation needs to do three things. It needs to protect human rights, advance transparency and accountability, and ensure testing of facial recognition systems in a way that demonstrates they are performing appropriately.
When it comes to law enforcement, there are important human rights protections that regulations need to cover, including prohibiting the use of facial recognition for indiscriminate mass surveillance and prohibiting use on the basis of an individual's race, gender, sexual orientation or other protected characteristics. Regulations should also ensure it's not being used in a way that chills important freedoms, such as freedom of assembly.
On transparency and accountability, we think law enforcement agencies should adopt a public use policy setting out how they will use facial recognition, setting out the databases they will be searching and how they will task and train individuals to use the system appropriately and to perform human review. We also think vendors should provide information about how their systems work and the factors that will affect performance.
Importantly, systems must also be subject to testing to ensure they are performing accurately. We recommend that vendors of facial recognition like Microsoft make their systems available for reasonable third party testing and implement mitigation plans for any performance gaps, including across demographic groups.
We also think that organizations deploying facial recognition must test systems in operational conditions, given the impact that environmental factors like lighting and backdrop have on performance. In the commercial setting, we think regulation should require conspicuous notice and express opt-in consent for any tracking.
I'll close my remarks by saying that we commend many of the elements of the provincial and federal privacy commissioners' recommendations from earlier this week, which set out important elements of the legal framework for facial recognition.
Thank you very much.
:
Thank you, Mr. Chair and members of the committee, for the opportunity to offer our thoughts on this study.
My name is Rizwan Mohammad, and I'm an advocacy officer with the National Council of Canadian Muslims, the NCCM. I'm joined today by NCCM CEO Mustafa Farooq. I'd also like to thank NCCM intern Hisham Fazail for his work on our submission.
Today we want to get to the heart of the problem with facial recognition technology, or FRT. A number of national security and policing agencies, as well as other government agencies, have come before you to tell you that FRT is an important tool with great potential use across government. You've been told that FRT can help overcome problems of human cognition and bias.
Here are some other names that you all know, names from times when these same agencies told you that surveillance would be done in ways that were constitutionally sound and proportionate. They are Maher Arar, Abdullah Almalki and Mohamedou Ould Slahi.
The same agencies that lied to the Canadian people about surveilling Muslim communities are coming before you now to argue that while mass surveillance will not be happening, FRT can and should be used responsibly. Those agencies, like the RCMP, have already been found by the Privacy Commissioner to have broken the law when it comes to FRT.
We are thus making the following two recommendations, and we want to be clear that our submissions are limited to exploring FRT in the non-consumer context.
First, we recommend that the government put forth clear and unequivocal privacy legislation that severely curtails how FRT can be utilized in the non-consumer context, allowing only for judicially approved exceptions in the context of surveillance.
Second, we recommend that the government set out clear penalties for agencies caught violating rules around privacy and FRT.
Let us begin with the first recommendation, calling for a blanket ban on FRT across the government without judicial authorization in the context of any and all national security agencies, including but not limited to the RCMP, CSIS and the CBSA. You know the reasons for this already. A 2018 report in the U.K. found that facial recognition software used by the U.K. Metropolitan Police returned incorrect matches in 98% of cases. Another study from 2019, which drew on a different methodology, reported that the Metropolitan Police had a false positive rate of 38%.
We are well aware that FRT works differently, and with different accuracy results, depending on the technology, but we all acknowledge as a matter of fact that there are algorithmic biases when it comes to FRT. Given what we know, given the privacy risks that FRT poses, and given that Canadians, including members on other committees in this House, have raised concerns around systemic racism in policing, we agree with other witnesses who have appeared before this committee in calling for an immediate moratorium on all uses of FRT in the national security context and for the RCMP until legislative guidelines are developed.
Simultaneously, we recommend that in developing legislative guidelines, a very high threshold be utilized, including judicial authorization, oversight and timeline limitations.
Secondly, we are shocked by the blasé attitude the RCMP has taken in approaching the issue of its use of Clearview AI. First the RCMP denied using Clearview AI, but then confirmed it had been using the software after news broke that the company's client list had been hacked. The excuse given was that the use of FRT wasn't widely known within the RCMP. The false answer the RCMP gave to the Privacy Commissioner, which was about as credible as the “dog ate my homework” excuse, was completely unacceptable.
The RCMP then had the audacity, after the Privacy Commissioner's findings in the report, to state that it did not necessarily agree with those findings. While the RCMP has taken certain steps to address the concerns raised, a failure of accountability when it comes to clear errors and misleading statements must carry clear penalties. Otherwise, how can we trust any such process or commitment to avoid mass surveillance?
We encourage this committee to recommend that strong penalties be assessed against agencies and officers who may breach the rules created around FRT, potentially through an amendment to the RCMP Act. We will provide the committee with a broader written brief in due course.
Subject to any questions, these are our submissions.
Thank you.
:
Thank you very much for the question.
It is the case that we don't sell facial recognition to local police in the U.S. I think our position is that it's really important to get law in place that can protect human rights in the context of facial recognition. I think one of the challenges in the U.S. is that there is no law on that front. There isn't the type of privacy law that you have in a lot of other countries, including Canada, although I'm aware of ongoing conversations about how the privacy framework in Canada can be improved, and those are important conversations to have as well.
That's our position. That's why we're using our voice proactively, attending conversations like this one and contributing to important work like this study, to make sure that we can get in place robust regulation for the use of facial recognition, with particular urgency around police use, and more broadly to make sure that the technology is being used in a way that is transparent, accountable and rights-protecting.
:
We have our broader responsible AI program, which we have been developing for the last few years. It has a few components. We have a company-wide AI governance team. This is a multi-stakeholder team that includes some of our Microsoft researchers, world-leading AI researchers who share knowledge about where the technology is and where the state of the art is going. They come together with people working on legal and policy issues and people with an engineering background to oversee the general program.
In terms of the other components, we also have a responsible AI standard. This is a set of requirements across our six AI principles, which I can go into detail on, that ensure that any teams that are developing AI systems or deploying AI systems are doing so in a way that meets our principles.
The final piece we have is a “sensitive use” review process. This comes into play when any potential development or deployment of a system hits one of three triggers: any time a system is going to be used in a way that affects an individual's legal opportunities or legal standing, any time there is a potential for psychological or physical harm, or any time there is an implication for human rights. In those cases, the governance team that I mentioned will come together and review whether we can move forward with a particular development or deployment of AI to ensure that it's being done in a responsible way.
You can imagine that those conversations apply across all of our systems, including the discussions we're having on facial recognition.
:
We think that this is a really important part of the conversation, for a number of reasons.
The accuracy of facial recognition has improved markedly in recent years. There's some very good research being done by the National Institute of Standards and Technology in the U.S., or NIST, that shows that accuracy has improved markedly for the best-performing systems in recent years. There is, however, a very wide gap between the best-performing systems and the least well-performing systems, and the less accurate systems tend to be more discriminatory as well, so we think testing is really important.
There are a couple of components to it. We think that vendors like Microsoft should allow for their systems to be tested by independent third parties in a reasonable fashion, so we allow for that at the moment via an API. A third party can go and test our system to see how accurate it is. We think that vendors should be required to respond to any testing and address any material performance gaps, including across demographics, so that's one thing: vendors doing something on the testing side.
We also think it's really very important that organizations deploying a facial recognition service test it in operational conditions. If you are a police customer and you're using a facial recognition system, you shouldn't just take the word of the vendor that it's going to be accurate in the abstract; you also need to test it in operational conditions. That's because environmental factors like image quality or camera positions have a really big impact on accuracy.
You can imagine that if you have a camera that is placed looking down on someone's head and there are smudges on the lens or poor quality imagery going into the system in general, it's going to have a really big impact on performance; therefore, there should also be a testing requirement for organizations deploying facial recognition to make sure that they know that it is working accurately in the environment in which it's going to be used.
Thank you to all the witnesses for joining us today. I'd also like to start with you, Mr. Larter.
I was reading an article written by Microsoft's Brad Smith in 2018 that covers a lot of issues similar to those you are talking about today. Facial recognition technology was being developed, and Microsoft was calling on government to impose regulations on the industry.
I'm wondering if you could reflect on how it works when tech giants can come up with this technology and then ask governments to regulate it. Is that how it should work? Are there better ways that we can maybe bring governments in as technology is being developed?
I'm just hoping you can reflect on that a bit.
:
It's a really important question, and we definitely think that it's for government to play a leading role in creating a regulatory framework for technology in general, including technologies like facial recognition.
We've tried to do a couple of things over the last few years. First was to implement internal safeguards so that we're doing our bit as a vendor of facial recognition to make sure that the technology is being used responsibly. I talked about our responsible AI program. We also have our Face API transparency note, which I think is a really important part of the conversation and speaks to this need for transparency around how facial recognition is developed and deployed.
This transparency note is a document that we make publicly available. It is clear about how the system works in terms of the capabilities of the technology, its limitations and what it shouldn't be used for, and the factors that will affect performance, so that a customer using the technology is well informed and able to make responsible deployment decisions.
That's some of what we've been doing internally. We do also think—because it's really important to build trust in technology in general and particularly in facial recognition, given some of the potential risks it can raise, which I mentioned in my remarks—that there is also a need for a regulatory framework.
We are keen to support those conversations. That's why we're very happy to be invited to discussions like this today. We really want to contribute our knowledge around how the technology works and where it is going so that we can create, led by governments and in conjunction with others across society, such as civil society groups, a good, robust regulatory framework for technology, so that the benefits of this powerful technology can be realized in a way that also addresses some of the challenges.
:
It's a very good question. I would say it is increasingly used. It is a technology that can have a lot of benefits, and I think individuals and organizations are realizing that.
There are a few different applications. A lot of them have to do with security, such as verification using facial recognition. For example, when you're logging in to your phone or your computer, often that is done through a facial recognition tool now. Frictionless and contactless check-in at airports would be another example of how facial recognition is being used, which has been particularly important over the last couple of years during the depths of the COVID crisis, obviously.
Beyond that, I think there are some really beneficial applications in the accessibility context. There are a number of organizations doing really interesting research around how you can use facial recognition to help those who are blind or with low vision better understand and interact with the world around them. We had a project called Project Tokyo, which involved facial recognition. It used a headset so that a blind individual could scan a room—let's say a canteen or an open space at work—and, if someone there had enrolled in the system and consented to be part of this individual's facial recognition system, identify that person and go over proactively to start a conversation in a way that would be very difficult otherwise.
Another application that I think a lot of people in the accessibility community are excited about is facial recognition for people with Alzheimer's or similar diseases that make it increasingly difficult to remember or recognize friends and loved ones. You can imagine the way in which facial recognition is now being explored to help prompt individuals to be able to recognize those friends and loved ones.
It's becoming a long answer, but I'll round off by saying there are also positive applications in the law enforcement context as well. We do think that as part of the criminal investigation process, facial recognition, with robust safeguards around it, can be a useful investigative tool. It's also being used for online identification of missing and trafficked individuals, including children, in a way that has been very beneficial as well.
There are some real benefits there, but, again, there are the challenges that I also mentioned, which is why you need a regulatory framework that can realize those benefits in a way that addresses the challenges.
:
It protects Microsoft to have a framework in place, but not necessarily for the reasons that were mentioned. We generally think that it's really important to build a regulatory framework for technology in general that engenders trust and shows that technology is being used in a trustworthy fashion.
We have been around for a while now. We're almost 50 years old as a company, and we realize that if society is going to reap the benefits of technology and if people are going to use it, they need to trust it. Regulation is a really important part of building that framework of trust. That's what we advocate for in general, and that's particularly why we are investing time in advocating for robust safeguards around facial recognition, given that it is a very powerful technology with some very positive applications, as I mentioned, but potentially some challenges as well.
Creating a framework around facial recognition that ensures it can be used in a trustworthy way, and in a way the public sees as trustworthy as well, is very important for society so that it can reap the benefits and make sure that this technology is used over the longer term.
:
That's a good question. I would say thank you very much for the invitation to submit materials. We would really appreciate that opportunity. We think the work the committee is doing here is very important, and we want to be as supportive and helpful as possible. I appreciate the opportunity to send some materials, and we will do that.
In terms of the metaverse, everyone is getting very excited about the opportunities there, and I think that is right. A number of technologies will go into creating the metaverse and ensuring that it performs in a way that people are excited about and that is responsible.
I think facial recognition will be one of those technologies, alongside a whole host of others. The metaverse—we call it the “multiverse” at Microsoft—offers a huge number of opportunities that we're really only just starting to explore as a society. There's a big conversation that we should have around exactly what we want the metaverse to look like and what safeguards we need there to make sure we're reaping the benefits of the technology and addressing some of the challenges.
Facial recognition will definitely be part of that in all kinds of ways that we probably can't even fully appreciate at this point.
I think the reality is that the answer is yes, we think there is a high possibility of this happening.
The reality is that we get calls all the time, which people don't hear about, from folks who are under surveillance by CSIS or the RCMP, about the issues that result from that. The reality is that this is across the sector. We know already that the CBSA pilot-tested a piece of technology called AVATAR at airports, which was supposed to be a sort of lie detector and which, by the way, has now been banned in other jurisdictions. We have grave concerns about how this technology can continue to be weaponized to profile people for potential terrorism.
:
Thank you very much, Mr. Chair, and thank you to our witnesses here today.
Let me start, as I often do, by inviting the witnesses to feel free to submit further documentation to this committee if, in today's testimony, they do not have an adequate chance to expound on their answers. It is certainly welcome, and it helps us.
Mr. Larter, as an example to frame my question: in the initial design of cameras, the chemicals used were calibrated around, generally speaking, a white person's face. I've done some reading and seen some documentation on that being the case, so there are technical limitations to FRT.
I'm wondering if you can comment on whether Microsoft has taken that into account in the development of its FRT, and on the possible implications that would have specifically when it comes to things like different races, genders, etc.
:
That's a really important question, so thank you for it.
As I mentioned, I do think one of the big risks that need to be addressed through regulation is the potential risk of discriminatory performance in facial recognition technology. Something we've been very mindful of as we've been developing our technology is making sure we have representative datasets that we're training the facial recognition on so that it can perform accurately, including across different demographic groups.
We would say that this is where the testing piece is very important and that you don't just take our word for it. We think it's important that vendors make available their facial recognition for that reasonable, independent third party testing that I mentioned, so that you're able to scrutinize how companies selling facial recognition are doing in terms of the algorithms they are building. That type of scrutiny, I think, is really important in terms of raising the bar—
:
Quite simply, it's very hard to engage in a conversation when basic facts aren't being acknowledged.
When CSIS tells us that they're not going to answer a basic question—which is the same question they haven't answered for you right now—about whether facial recognition technology is being used, it becomes very hard to get any sense of accountability. It becomes very hard to have a conversation. When the RCMP tells us one thing, tells Canadians another and tells the Privacy Commissioner something else, it becomes very hard to have a good-faith, honest conversation about what the future could actually look like.
I think all of us are interested in a world in which law enforcement uses facial recognition technology responsibly. Folks are right when they say that there are potentially good use cases, especially in child pornography investigations and cases like that. The reality is that our agencies here are simply not meeting the standard that Canadians expect of them, for all of the reasons that you all know about, vis-à-vis systemic racism and so many of these other challenges.
:
That's a really important question, and it's one of the major risks that we think needs to be addressed around facial recognition use.
I'll come back to what I said before in terms of the internal safeguards we've mentioned. One of the most important is the testing piece. We make sure that we are opening up our facial recognition system to independent third party testing, and that we are training and testing it ourselves in such a way that we are confident it is performing accurately and in a way that minimizes gaps across different demographic groups.
I would really like to emphasize, as part of my contribution today, that testing is a really important part of making sure that the technology is performing in an accurate manner, which it can do. The best-performing algorithms have made incredible strides in recent years, but there are many algorithms out there that aren't as accurate. You need to be able to test them to make sure that when, for example, police are using them, they are using the most accurate systems.
Through you, Mr. Chair, to Mr. Larter, I alluded to this, but I want to make sure that there's 100% clarity in my ask.
Mr. Larter, I'm asking you and requesting that for the purpose of this study in this committee, you provide this committee with a list of all contracts, both present and past, related to our public safety—government-related, military-related, law enforcement and police agencies in Canada—given that you were unable to provide that testimony here today. Do you understand that request?
:
Thank you very much. I do appreciate that.
Through you, Mr. Chair, to Mr. Farooq: I believe I heard in your opening comments, Mr. Farooq, some talk around legislative reforms. I want to underscore my perception of where we are in this country, given testimony we've heard previously from our other guest witness, and ask whether you have contemplated within your submission, with specificity, different ways in which we can tighten up our framework to ensure that we have knowledge of the use, that we have accountability for the use and that it's done in accordance with our charter rights.
I'm just wondering if you can expand on any of your earlier comments about some of the legislative improvements you feel we should make.
:
Absolutely, and thank you for this very important question.
We will provide a longer exploration in our written brief, but what I would say in general is, first of all, that I think the banning of real-time FRT in places like airports and at our borders is important as we think about a general categorization.
In terms of investigative tools, while we're calling for a moratorium until these policies are developed, we think the set-up should be very similar to how it works when the police are trying to obtain any search warrant: that they appear before a judge and they put forward their argument and their best-case scenario, with clear documentation, which is then provided to the public. We'll provide specific submissions on the sections and subsections that we think need to be amended.
:
Thank you, Mr. Chair, and I apologize that I'm not with you guys in person today. I'm dealing with overland flooding in the riding, and in my own yard.
First I want to direct my questions to Mr. Farooq and Mr. Mohammad. I want to drill down deeper, because as we go forward in this regulatory process, I want to make sure that we check all the boxes on the legislation we need to focus on.
You've already mentioned CSIS and the RCMP. You've also talked about the Criminal Code amendments that are going to have to happen, as well as the Privacy Act and PIPEDA. I know that under national defence, the CSE is mainly listening in on online chatter. Maybe it has the formula we need, because for it to listen to any Canadian or to any of our Five Eyes allies, it can't do indirectly what it's not allowed to do directly. It has to get warrants or ministerial authorizations for issues surrounding national security and national defence.
Is that what you're suggesting are the steps we need to take to ensure the charter rights of Canadians are protected?
:
I think this is a fundamental question as part of the regulatory discussion. I think deciding what is a permissible use and what is not a permissible use is very important.
We have some suggestions on this front. We think that indiscriminate mass surveillance is not something that should be permitted. We also think that discriminating against an individual on the basis of race, gender, sexual orientation or other protected characteristics should be prohibited.
Also, the democratic freedoms piece, which we discussed today, is really important, and I'm pleased to hear that it's part of the discussion. That is one to address as well in making sure that the technology is not used in a way that undermines fundamental freedoms like freedom of assembly. I think those are some core uses that we would suggest.
One that is maybe more specific to the law enforcement context as well is that we think it's important that the output of facial recognition is not used as the only reason or the only piece of evidence to take a material decision—for example, to arrest someone.
I think Ms. Khalid, in her line of questioning, raised a very important point as it relates to creating a legal framework, and that is what I'll call “the duty of candour” of our security agencies, police, military and CBSA in how they're using these technologies.
Through you, Mr. Chair, to Mr. Farooq, I want to note an August 31, 2021, Globe and Mail report on the court admonishing CSIS once again over its duty of candour. The report noted some other cases in which breaches had occurred in the way CSIS sought warrants and the way in which it sought, quite frankly, to surreptitiously and unlawfully surveil Canadians.
Through you, Mr. Chair, to Mr. Farooq, in your experience, doing the advocacy work that you do—because now we're on the human side of the application of the tool—could you perhaps share with us instances in which our security and public safety agencies may not have been forthcoming about the way in which they were surveilling members of the Muslim community?
My thanks to all of our witnesses today.
With that, I'm going to suspend. We will resume in camera.
I'll ask our witnesses, with our thanks, to leave the room relatively quickly. We will carry on in camera in a moment.
The meeting is suspended.
[Proceedings continue in camera]