The Chair:
I call the meeting to order.
Welcome to meeting number 27 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics. Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.
I would like to now welcome our witnesses.
From the American Civil Liberties Union, we have Esha Bhandari. From the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, we have Tamir Israel, staff lawyer.
I apologize for the late start. It was just a function once again—and not uncommon at this time of the year—of the timing of votes in the House of Commons. The meeting was scheduled for one hour, from 3:30 to 4:30. We will still go ahead for the full hour, starting now.
With that, I will ask Ms. Bhandari to begin.
You have the floor for up to five minutes.
Ms. Esha Bhandari:
Thank you very much, Mr. Chair.
Thank you to the committee for the invitation.
My name is Esha Bhandari, and I am a deputy director of the American Civil Liberties Union's speech, privacy and technology project based in New York. I am originally from Saint John, New Brunswick.
I'd like to speak to the committee about the dangers of biometric identifiers with a specific focus on facial recognition.
Because biometric identifiers are personally identifying and generally immutable, biometric technologies—including face recognition—pose severe threats to civil rights and civil liberties. They enable privacy violations, including the loss of anonymity in contexts where people have traditionally expected it; persistent tracking of movement and activity; and identity theft.
Additionally, flaws in the use or operation of biometric technologies can lead to significant civil rights violations, including false arrests and denial of access to benefits, goods and services, as well as employment discrimination. All of these problems have been shown to disproportionately affect racialized communities.
What exactly are we talking about with biometrics?
Prior to the digital age, collection of limited biometrics like fingerprints was laborious and slow. Now we have the potential for near instantaneous collection of biometrics, including face prints. We have machine learning capabilities and digital age network technologies. All of these technological advances combined make the threat of biometric collection even greater than it was in the past.
Face recognition is, of course, an example of this, but I want to highlight that voice recognition, iris or retina scans, DNA collection, gait and keystroke recognition are also examples of biometric technology that have effects on civil liberties.
Facial recognition allows for instant identification at a distance, without the knowledge or consent of the person being identified and tracked. Identifiers that in the past had to be captured with the knowledge of the person, such as fingerprints, can now be collected without their knowledge, as can the DNA we shed as we go about our daily lives. Iris scans can be done remotely, and face prints can be collected remotely, all without the knowledge or consent of the person whose biometrics are being collected.
Facial recognition is particularly prone to the flaws of biometrics, which include design flaws, hardware limitations and other problems. Multiple studies have shown that face recognition algorithms have markedly higher misidentification rates for people of colour, including Black people, children and older adults. There are many reasons for this. I won't get into the specifics, but it is partly because of the datasets the algorithms are trained on and partly because of flaws that emerge in real-world conditions.
I also want to highlight that the error rates observed in test conditions are often exacerbated in real-world conditions, which tend to be worse than test conditions—for example, when a facial recognition tool is used on poor-quality surveillance footage.
There are also other risks with face recognition technology when it is combined with other technology to infer emotion, cognitive state or intent. We see private companies increasingly promoting products that purport to detect emotion or affect, such as aggression detectors, based on facial tics or other movements that this technology picks up on.
Psychologists who study emotion agree that this project is built on faulty science because there is no universal relationship between emotional states and observable facial traits. Nonetheless, these video analytics are proliferating, claiming to detect suspicious behaviour or detect lies. When deployed in certain contexts, this can cause real harm, including employment discrimination if a private company is using these tools to analyze someone's face during an interview to infer emotion or truthfulness and deny jobs based on this technology.
I have been speaking, of course, about the flaws with the technology and the error rates that it has, which, again, disproportionately fall on certain marginalized communities, but there are, of course, problems even when the facial recognition technology functions and functions accurately.
The ability of law enforcement, for example, to systematically track people and their movements over time poses a threat to freedom and civil liberties. Sensitive movements can be identified, whether people are travelling to protests, to medical facilities or to other sensitive locations. In recognition of these dangers from law enforcement use, at least 23 jurisdictions in the United States, from Boston and Minneapolis to San Francisco and Jackson, Mississippi, have enacted legislation halting law enforcement or government use of face recognition technology.
There's also, of course, the private sector use of this technology, which I just want to highlight. Again, you see trends now where, for example, landlords may be using facial recognition technology in buildings, which enables them to track their tenants' movements in and out of the building and also their guests—romantic partners and others—who come in and out of the building. We also see this use in private shopping malls and in other contexts as well—
Mr. Tamir Israel:
Good afternoon, Mr. Chair and members of the committee.
My name is Tamir Israel and I'm a lawyer with the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic at the University of Ottawa, which sits on the traditional unceded territory of the Algonquin Anishinabe people.
I want to thank you for inviting me to participate in this important study into facial recognition systems.
As the committee has heard, facial recognition technology is versatile and poses an insidious threat to privacy and anonymity, while undermining substantive equality. It demands a societal response that is different from, and more proactive than, the response to other forms of surveillance technology.
Face recognition is currently distinguished by its ability to operate surreptitiously and at a distance. Preauthenticated image databases can also be compiled without participation by individuals, and this has made facial recognition the biometric of choice for achieving a range of tasks. In its current state of development, the technology is accurate enough to inspire confidence in its users but sufficiently error prone that mistakes will continue to occur with potentially devastating consequences.
We have long recognized, for example, that photo lineups can lead police to fixate erroneously on particular suspects. Automation bias compounds this problem exponentially. When officers using an application such as Clearview AI or searching a mug shot database are presented with an algorithmically generated gallery of 25 potential suspects matching a grainy image taken from a CCTV camera, the tendency is to defer to the technology and to presume the right person has been found. Simply including human supervision will, therefore, never be sufficient to fully mitigate the harms of this technology.
Of course, racial bias remains a significant problem for facial recognition systems as well. Even for top-rated algorithms, false matches can be 20 times higher for Black women, 50 times higher for Native American men and 120 times higher for Native American women than they are for white men.
This persistent racial bias can render even mundane uses of facial recognition deeply problematic. For example, a United Kingdom government website relies on face detection to vet passport image quality, providing an efficient mechanism for online passport renewals. However, the face detection algorithm often fails for people of colour and this circumstance alienates individuals who are already marginalized by locking them out of conveniences available to others.
As my friend Ms. Bhandari mentioned, even when facial recognition is cured of its biases and errors, the technology remains deeply problematic. Facial recognition systems use deeply sensitive biometric information and provide a powerful identification capability that, as we know from other investigative tools such as street checks, will be used disproportionately against Indigenous, Black and other marginalized communities.
So far, facial recognition systems can be and have been used by Canadian police on an arrested suspect's mobile device, on a device's photo album, on CCTV footage in the general vicinity of crimes and on surveillance photos taken by police in public spaces.
At our borders, facial recognition is at the heart of an effort to build sophisticated digital identities. “Your face will be your passport” is becoming an all-too-common refrain. Technology also provides a means of linking these sophisticated identities and other digital profiles to individuals, driving an unprecedented level of automation.
At all stages, transparency is an issue, as government agencies in particular are able to adopt and repurpose facial recognition systems surreptitiously, relying on dubious lawful authorities and without any advance public licence.
We join many of our colleagues in calling for a moratorium on public safety and national security related uses of facial recognition and on new uses at our borders. Absent a moratorium, we would recommend amending the Criminal Code to limit law enforcement use to investigations of serious crimes where reasonable grounds to believe can be established. A permanent ban on the use of automated, live biometric recognition by police in public spaces would also be beneficial, and we would recommend exploring a broader prohibition on the adoption of new facial recognition capabilities by federal bodies absent explicit legislative or regulatory approval.
Substantial reform of our two core federal privacy laws is also required. Bill C-27 was tabled this morning; it would enact the artificial intelligence and data act and reform our federal private sector law, PIPEDA. Those reforms are pending and will be discussed, but beyond the amendments in Bill C-27, both PIPEDA and the Privacy Act need to be amended so that biometric information is explicitly encoded as sensitive, requires greater protection in all contexts and, under PIPEDA, requires express consent in all contexts.
Both PIPEDA and the Privacy Act should also be amended to legally require companies and government agencies to file impact assessments with the Privacy Commissioner prior to adopting intrusive technologies. Finally, the commissioner should be empowered to interrogate intrusive technologies through a public regulatory process and to put in place usage limitations or even moratoria where necessary.
Those are my opening remarks. I thank the committee for its time. I look forward to your questions.
I'm going to move to Ms. Bhandari, if I may.
I'd like to touch on something that, unfortunately, we didn't get to enough in this study, which is location tracking technologies used in commercial and retail spaces. For example, Cadillac Fairview is a big mall owner here in Canada. They tend to have cameras and other technologies in their spaces, from what I understand.
We talk a lot about the legislation in terms of the relationship of private companies with law enforcement. I'll start with you, Ms. Bhandari, and perhaps also Mr. Israel.
What are your thoughts on how we legislate that private or commercial relationship with these types of technologies going forward, in an ideal world, if there was a moratorium and we had time to think about this?
To address the question on private sector use, the harms are real. I'll highlight a few examples.
In Michigan, for example, a skating rink was using a facial recognition tool on customers who were coming in and out. A 14-year-old Black girl was ejected from the skating rink after the face recognition system incorrectly matched her to a photo of someone who was suspected of previously disrupting the rink's business.
We've seen private businesses use this type of technology, whether it's in concert venues, stadiums or sports venues, to identify people on a blacklist: customers they don't want to allow back in, for whatever reason. Again, the risk of error, the dignitary harms involved and the denial of service are very real. There's also the fact that this tracking information is now potentially in the hands of private companies that may have widely varying security practices.
We've also seen examples of security breaches, where large facial recognition databases held by government or private companies have been revealed in public. Because these face prints are immutable—it's not like a credit card number, which you can change—once your biometrics are out there and potentially used for identity purposes, that's a risk.
Similarly, we've seen some companies—for example, Walgreens in the United States—now deploying face recognition technology that can pick out a customer's age and gender and show them tailored ads or other products. This is also an invasive practice that could lead to concerns about shoppers and consumers being steered to discounts or products based on gender stereotypes, which can further segregate our society.
Even more consequentially, it's used by employers—
Ms. Esha Bhandari:
One of the areas that we have been concerned about is the expansion of facial recognition and other biometric technology in airports. We haven't looked specifically at Nexus, but the same principle holds with, for example, the global entry system in the United States.
The concern, of course, is that as people become required to provide face prints or iris scans to access essential services—going to the airport, crossing the border, entering a government building—it facilitates a checkpoint society the likes of which we haven't seen before. These are not contexts in which people can meaningfully opt out, so one clear area of regulation could be providing people with a meaningful opt-out by saying that, if you don't want to prove your identity via an iris scan, we'll provide you the option to do so another way, with your passport or your Nexus card, for example.
On the airport use and the border use, because it's such a coercive environment, because people don't have the choice to walk away, that has been a big concern.
Mr. Tamir Israel:
That's certainly a very valid and salient concern.
The no-fly lists have been a long-standing problem, and there have been proposals to create facial recognition-enabled lists with comparable objectives. CBSA did, in fact, pilot one for a while and, I think, decided not to implement it yet. That is something they piloted, and it's deeply problematic.
The response from CBSA has been concerning. For example, one CBC report tried to probe the racial biases in one of those facial recognition systems and asked for more detailed breakdowns of error rates and racial bias rates. First, through access to information requests, it appeared that CBSA was not aware, at the time of its adoption, that these biases were a real problem. Later on, CBSA responded that there are national security concerns with providing this type of error data, which is just not.... In other jurisdictions, this data is publicly available. It's required to be publicly available by law. That's not a good approach.
More recently, there have been developments, in the sense that CBSA announced they will try to implement a biometric study hub within their infrastructure, but we haven't seen much going on yet.
Greetings to all the other committee members here today.
It's a very important subject we're covering. A lot of Canadians have genuine and sincere concerns when they consider this. As our world continues to change at such a rapid pace and concerns about privacy continue to rise, balanced against the need for security, I think Canadians want to be assured that all possible safeguards are put in place to protect individual rights and the privacy rights of Canadians.
Mr. Israel, I'll go to you first. Your report makes several interesting recommendations when it comes to what you would like to see in a legislative framework for facial recognition technology, which we do not currently have. In your report, you write that the need for legislative backing applies to border control implementations that rely on a form of consent, such as opt out or opt in.
Do you believe all use of facial recognition technology by border control should have an opt-in, opt-out component?
Mr. Tamir Israel:
I think Ms. Bhandari touched on this as well.
Because facial recognition operates surreptitiously and doesn't carry the associations of things like fingerprinting, which historically comes out of a criminal justice context, there's a bit less social stigma attached to it in the minds of people, although there shouldn't be, because it's increasingly used in the same contexts as mug shots and so on.
The other part of that is what Ms. Bhandari was talking about before, which is that, because it happens remotely and with less direct interference with individuals, people are sometimes just not aware of how intrusive FRT is in comparison with other biometrics, where you have to physically take hold of people's fingers or scan their eyes in a way that requires them to lean in to ensure a good iris scan. For those two reasons, facial recognition has been easier to get adopted.
Ms. Esha Bhandari:
One of the main ones is that it enabled us to hold a company like Clearview accountable for creating a database of hundreds of millions of people's face prints without consent and selling it to private companies and law enforcement for a whole host of purposes.
We have been advocating for other states to adopt biometric privacy laws and, in particular, to make changes, because the Illinois law is at this point quite old and we now have more knowledge about the technology and its risks.
Among the recommendations we make for states that would adopt legislation like the Illinois BIPA, one is to clearly require companies to provide notice and obtain written consent before collecting, using or disclosing any person's biometric identifier, and to prohibit companies from withholding services from people who choose not to consent, so that people aren't forced to choose between accessing a service and giving up their biometrics.
We also urge that any legislation require businesses to delete biometric identifiers one year after the individual's last interaction with the business. For example, if someone consented and gave their biometrics to access a service but no longer has a relationship with that business, the business shouldn't be able to hold on to them and amass a database of these sensitive biometrics. As Mr. Israel mentioned, there's a risk of breach. We've seen instances of such breaches, and there's no need for private entities to retain that data. We advocate adopting legislation like Illinois' BIPA but also updating it.
I am very concerned. The pilot did get interrupted a bit by the pandemic, and I don't know how aggressively it's being moved forward now. I'm very concerned about the idea of using the pinch point of the travel experience to encourage people to opt in and create these types of profiles, knowing that they're then going to be used against them, not just in border control contexts, where many marginalized communities are already at a massive disadvantage, but here and abroad, in other countries that end up implementing the same system. It's intended to be a global system. The idea is also that these same systems will then be used by the private sector for fraud detection or identity management in interactions with private companies.
The facial recognition component of this is a big part. All the errors there are, again, going to fall most heavily on visible minorities and members of marginalized communities. The other assessment and social ranking mechanisms included in this identity verification program, which will sit on your device and be linked through your facial recognition, also tend to weigh very heavily and disproportionately against members of marginalized communities.
I think this is not the way to go, personally.
Through you, I'd like to really thank these two witnesses. They've been outstanding and I really do appreciate their insights.
For the two witnesses, in case you don't know how committees work here in the House of Commons, we have to receive written or verbal testimony before we can make recommendations. We have to hear something on the issue before we can go forward on it.
I would like to take this on a different tack. I'll ask both witnesses this today.
Mr. Israel, in response to Mr. Kurek, you were talking about transparency—or the other side, the opacity—of these FRT systems and how they surreptitiously take pictures and identify people. I guess my question would be this: Are either of you aware of a registry of companies that engage in facial recognition technology? Is there a list somewhere of companies, governments or agencies of governments that engage in capturing images for the purposes of FRT?