:
I call this meeting to order. Welcome to meeting number 15 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.
Today’s meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. Per the directive of the Board of Internal Economy on March 10, 2022, all those attending the meeting in person must wear a mask, except for members who are at their place during proceedings.
For those participating by video conference, click on the microphone icon to activate your mike. Please mute your mike when you are not speaking.
For witnesses participating for the first time in this type of meeting, you have the option of interpretation. At the bottom of your screen, you can select the floor audio, which is in either language, or English or French for translation. For those in the room, you can use the earpiece and select the desired channel.
I would remind everyone that all comments should be addressed through the chair.
Members in the room should raise their hand to speak. For members on Zoom, please use the “raise hand” function. The clerk and I will manage the speaking order as best we can. We appreciate your patience and understanding.
I welcome all of our witnesses. We have four witnesses this morning: Dr. Rob Jenkins, professor, University of York; Mr. Sanjay Khanna, strategic adviser and foresight expert; Ms. Angelina Wang, computer science graduate researcher, Princeton University; and Dr. Elizabeth Anne Watkins, post-doctoral research associate, Princeton University.
We will begin with Dr. Jenkins.
You have five minutes for your opening statements.
Thank you, Mr. Chair and members of the committee.
My name is Rob Jenkins. I'm a professor of psychology at the University of York in the U.K., and I speak to the issue of face recognition from the perspective of cognitive science.
I'd like to begin by talking about expectations of face recognition accuracy and how actual performance measures up to these expectations.
Our expectations are mainly informed by our experience of face recognition in everyday life, and that experience can be highly misleading when it comes to security and forensic settings.
Most of the time we spend looking at faces, we're looking at familiar faces, and by that I mean the faces of people we know and have seen many times before, including friends, family and colleagues. Humans are extremely good at identifying familiar faces. We recognize them effortlessly and accurately, even under poor viewing conditions and in poor quality images. The everyday success of face recognition in our social lives can lead us to overgeneralize and to assume that humans are good at recognizing faces generally. We are not.
Applied face recognition, including witness testimony, security and surveillance, and forensic face matching, almost always involves unfamiliar faces, and by that I mean the faces of people we do not know and have never seen before.
Humans are surprisingly bad at identifying unfamiliar faces. This is a difficult task that generates many errors, even under excellent viewing conditions and with high quality images. That is the finding not only for randomly sampled members of the public but also for trained professionals with many years of experience in the role, including passport officials and police staff.
It is essential that we evaluate face recognition technology, or FRT, in the context of unfamiliar face recognition by humans. This is partly because the current face recognition infrastructure relies on unfamiliar face recognition by humans, making human performance the relevant comparison, and partly because, in practice, FRT is embedded in face recognition workflows that include human operators.
Unfamiliar face recognition by humans, a process that is known to be error prone, remains integral to automatic face recognition systems. To give one example, in many security and forensic applications of FRT, an automated database search delivers a candidate list of potential matches, but the final face identity decisions are made by human operators who select faces from the candidate list and compare them to the search target.
The U.K. “Surveillance Camera Code of Practice” states that the use of FRT “...should always involve human intervention before decisions are taken that affect an individual adversely”. A similar principle of human oversight has been publicly adopted by the Australian federal government: “decisions that serve to identify a person will never be made by technology alone”.
Human oversight provides important safeguards and a mechanism for accountability; however, it also imposes an upper limit on the accuracy that face recognition systems could achieve in principle. Face recognition technologies are not 100% accurate, but even if they were, human oversight bakes human error into the system. Human error is prevalent in these tasks, but there are ways to mitigate it. Deliberate efforts, either by targeted recruitment or by evidence-based training, must be made to ensure that the humans involved in face recognition decisions are highly skilled.
Use of FRT in legal systems should be accompanied by transparent disclosure of the strengths, limitations and operation of this technology.
If FRT is to be adopted in forensic practice, new types of expert practitioners and researchers are needed to design, evaluate, oversee and explain the resultant systems. Because these systems will incorporate human and AI decision-making, a range of expertise is required.
Thank you.
:
Mr. Chair, thank you very much for the opportunity to speak to you and members. I will be speaking about facial recognition technology in terms of the individual, digital society and government.
I am a consultant in the areas of strategic foresight, scenario planning and global change, and I am an adjunct professor in the Master of Public Policy in Digital Society program at McMaster University.
A key foresight method that I use for planning for the future is scenario planning. As Canada navigates the most uncertainty it has faced since the start of the post-war period, scenario planning can play a role in helping legislators to inform resilient strategy and public policy. I see the following as important issues to address with facial recognition, which I will refer to as FRT.
One, people are being targeted by FRT without meaningful consent and/or in ways they do not understand. Two, societies that are increasingly unequal include populations of people who cannot advocate for their interests related to FRT's current or possible use. Three, legislators will always be behind the curve if they do not take the time to explore the plausible futures of digital society and the role of novel technologies such as FRT within them.
I will speak to these concerns from the perspectives of the individual, of society and of government.
In terms of the individual, our faces open doors for us and can lead to doors being closed on us. We experience biases across the spectrum from negative to positive and implicit to explicit based on how our faces are perceived and on other factors related to our appearance. This fundamental reality shapes our lives.
With an FRT-enabled world, what might it mean to be recognized by technical systems in which FRT is embedded?
What might it mean for FRT to be combined with sentiment analysis to quickly identify feelings at vulnerable moments when a person might be swayed or impacted by commercial, social or political manipulation?
What might it mean for a person to be identified as a potential social, political or public safety threat by FRT embedded into security robots?
What might it mean for a person to be targeted as a transactional opportunity or liability by FRT embedded into gambling or commercial services?
Technologies associated with FRTs, such as big data, machine learning and artificial intelligence, amplify these potential risks and opportunities of FRT and other biometric technologies. While some individuals may welcome FRT, many are concerned about being targeted and monitored. In cases in which rights are infringed, individuals may never know how or why; companies may choose not to reveal the answers, and there may not be meaningful consent.
In such cases, there will be no accessible remedies for individuals impacted by commercial, legal or human rights breaches.
In terms of digital society, Canadian society faces unprecedented challenges. Rising social and racial inequalities in our country have been worsened greatly by the pandemic. Canadians are experiencing chronic stress and declining physical and mental health. Social resilience is undermined by disinformation and misinformation. Canada is addressing new and threatening challenges to the post-war order. The climate crisis is a co-occurring threat multiplier.
Despite these challenges, major technology companies are profiting from opportunities amidst the unprecedented risk and so have gained additional leverage in relation to government and our digital society. In the process, a few companies have accrued considerable power with trillion-dollar-plus valuations, large economic influence and a lock on machine learning and artificial intelligence expertise.
As I speak, technology leaders are imagining the next FRT use cases, including how FRT might be used more widely in business, government and industry. Some tech companies are exploring threats and opportunities that would justify use cases that may be unlawful today but could be viable in new circumstances, from a change in government to a shocking security event to changes in labour laws.
In terms of government, a society facing constant disruption has not proved to be a universally safe one for Canadians. The realities of harms and potential harms to individuals, and of the risks and opportunities for business and government, put effective governance in the spotlight. At a time of unprecedented risk, parliamentarians have a responsibility to make sense of societal change and to comprehend plausible futures for FRT amidst the use of sophisticated surveillance systems in “smarter” cities, growing wealth and income inequality, and threats to the rights of children and marginalized communities.
Creating effective law and policy related to FRT should involve due contemplation of plausible futures.
I respect that for you, as legislators, this is a challenging task, given the often short-term horizons of elected individuals and parties. However, prospective thinking can complement the development of legislation to deal with novel and often unanticipated consequences of technologies as potent as FRT, which is inextricably linked with advances in computer vision, big data, human-computer interaction, machine learning, artificial intelligence and robotics.
:
Hi, I'm Angelina Wang, a graduate researcher in the computer science department at Princeton University. Thank you for inviting me to speak today.
I will give a brief overview of the technology behind facial recognition, as well as highlight some of what are, in my view, the most pertinent technical problems with this technology that should prevent it from being deployed.
These days, different kinds of facial recognition tasks are generally accomplished by a model that has been trained using machine learning. What this means is that rather than any sort of hand-coded rules, such as that two people are more likely to be the same if they have the same coloured eyes, the model is simply given a very large dataset of faces with annotations, and instructed to learn from it. These annotations include things like labels for which images are the same person, and the location of the face in each image. These are typically collected through crowdsourcing on platforms like Amazon Mechanical Turk, which has been known to have homogeneous worker populations and unfavourable working conditions. The order of magnitude of these datasets is very large, with the minimum being around 10,000 images, and the maximum going up to millions. These datasets of faces are frequently collected just by scraping images off the Internet, from places like Flickr. The individuals whose faces are included in this dataset generally do not know their images were used for such a purpose, and may consider this to be a privacy violation. The model uses these massive datasets to automatically learn how to perform facial recognition tasks.
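To make the structure of such an annotated dataset concrete, here is a minimal Python sketch. The field names, file paths and labels are hypothetical, invented purely for illustration, and are not drawn from any specific dataset mentioned in the testimony.

```python
# Minimal sketch of the kind of annotated face dataset described above.
# All field names and values are hypothetical.

face_dataset = [
    {
        "image_path": "images/000001.jpg",  # photo scraped from the Internet
        "identity_label": "person_0042",    # crowdsourced label: which images show the same person
        "face_box": (112, 80, 96, 96),      # crowdsourced face location: (x, y, width, height)
    },
    {
        "image_path": "images/000002.jpg",
        "identity_label": "person_0042",    # same person as above, so these two form a matching pair
        "face_box": (60, 45, 120, 120),
    },
    {
        "image_path": "images/000003.jpg",
        "identity_label": "person_0917",    # a different person
        "face_box": (30, 20, 150, 150),
    },
]

# A model is trained on tens of thousands to millions of such records and
# learns on its own which visual patterns distinguish identities, rather
# than following hand-coded rules such as comparing eye colour.
matching_pairs = [
    (a["image_path"], b["image_path"])
    for i, a in enumerate(face_dataset)
    for b in face_dataset[i + 1:]
    if a["identity_label"] == b["identity_label"]
]
print(matching_pairs)  # [('images/000001.jpg', 'images/000002.jpg')]
```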
It’s worth noting here that there is also lots of pseudoscience on other kinds of facial recognition tasks, such as gender prediction, emotion prediction, and even sexual orientation prediction and criminality prediction. There has been warranted backlash and criticism of this work, because it's all about predicting attributes that are not visually discernible.
In terms of what some might consider to be more legitimate use cases of facial recognition, these models have been shown over and over to have racial and gender biases. The most prominent work that brought this to light was by Joy Buolamwini and Timnit Gebru called “Gender Shades”. While it investigated gender prediction from faces, a task that should generally not be performed, it highlighted a vitally important flaw in these systems. What it did was showcase that hiding behind the high accuracies of the model were very different performance metrics across different demographic groups. In fact, the largest gap was a 34.4% accuracy difference between darker skin-toned female people and lighter skin-toned male people. Many different deployed facial recognition models have been shown to perform worse on people of darker skin tones, such as multiple misidentifications of Black men in America, which have led to false arrests.
There are solutions to these kinds of bias problems, such as collecting more diverse and inclusive datasets, and performing disaggregated analyses to look at the accuracy rates across different demographic groups rather than looking at one overall accuracy metric. However, the collection of these diverse datasets is itself exploitative of marginalized groups by violating their privacy to collect their biometric data.
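As a concrete illustration of the disaggregated analysis described above, the following sketch uses invented numbers, chosen only to echo the scale of the gap reported in “Gender Shades”, to show how a single overall accuracy figure can hide large differences between groups.

```python
# Disaggregated accuracy analysis on invented data. The numbers are
# illustrative only; they are not measurements of any real system.

records = [
    # (demographic_group, prediction_was_correct)
    *[("lighter-skinned male", True)] * 99,
    ("lighter-skinned male", False),
    *[("darker-skinned female", True)] * 65,
    *[("darker-skinned female", False)] * 35,
]

overall = sum(correct for _, correct in records) / len(records)
print(f"Overall accuracy: {overall:.1%}")  # a single figure that looks reassuring

for group in sorted({g for g, _ in records}):
    subset = [correct for g, correct in records if g == group]
    print(f"{group}: {sum(subset) / len(subset):.1%}")  # reveals the gap between groups
```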
While these kinds of biases are theoretically surmountable with current technology, there are two big problems that the current science does not yet know how to address. These are the two problems of brittleness and interpretability. By brittleness, I mean that there are known ways that these facial recognition models can break down and allow bad actors to circumvent and trick the model. Adversarial attacks are one such method, where someone can manipulate the face presented to a model in a particular way such that the model is no longer able to identify them, or even misidentify them as someone completely different. One body of work has shown how simply putting a pair of glasses that have been painted a specific way on a face can trick the model into thinking one person is someone entirely different.
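To illustrate the brittleness being described, here is a toy sketch. It uses a simple linear classifier rather than a real face recognition model, so it only demonstrates the principle behind adversarial examples such as the painted-glasses attack: a small, deliberately chosen change to the input flips the model's decision.

```python
# Toy adversarial example against a linear "identity A vs identity B" classifier.
# This is not a real face recognition system; it only shows the principle.

import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)  # weights of the toy classifier
x = rng.normal(size=100)  # a toy input "image" (flattened pixel values)

def predicted_identity(features):
    return "identity A" if features @ w > 0 else "identity B"

score = x @ w
# Nudge each "pixel" just enough, in the most damaging direction,
# to push the score across the decision boundary.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(w) * np.sign(score) * epsilon

print("original prediction: ", predicted_identity(x))
print("perturbed prediction:", predicted_identity(x_adv))
print("change per pixel:    ", epsilon)  # a small perturbation per pixel
```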
The next problem is one of interpretability. As I previously mentioned, these models learn their own sets of patterns and rules from the large dataset they are given. Discovering the precise set of rules the model is using to make these decisions is extremely difficult, and even the engineer or researcher who built the model frequently cannot understand why it might perform certain classifications. This means that if someone is misclassified by a facial recognition model, there is no good way to contest this decision and inquire about why such a decision was made in order to get clarity. Models frequently rely on something called “spurious correlations,” which is when a model uses an unrelated correlation in the data to perform a classification. For example, medical diagnosis models may be relying on an image artifact of a particular X-ray machine to perform classification, rather than the actual contents in the image. I believe it is dangerous to deploy models for which we have such a low understanding of their inner workings in such high-stakes settings as facial recognition.
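The following sketch illustrates a spurious correlation of the kind described above, using synthetic data only. A hypothetical scanner-artifact flag happens to track the label in the training data, so a simple model leans on it; when that coincidence disappears at deployment time, accuracy drops sharply.

```python
# Toy illustration of a spurious correlation, on synthetic data only.

import numpy as np

rng = np.random.default_rng(1)

def make_data(n, artifact_tracks_label):
    label = rng.integers(0, 2, size=n)                 # the true condition
    real_signal = label + rng.normal(0, 1.5, size=n)   # weak genuine feature
    if artifact_tracks_label:
        # In training, the artifact flag agrees with the label 95% of the time.
        artifact = np.where(rng.random(n) < 0.95, label, 1 - label)
    else:
        # At deployment, the coincidence is gone: the artifact is unrelated.
        artifact = rng.integers(0, 2, size=n)
    X = np.column_stack([real_signal, artifact, np.ones(n)])
    return X, label

X_train, y_train = make_data(2000, artifact_tracks_label=True)
X_test, y_test = make_data(2000, artifact_tracks_label=False)

# A least-squares linear model stands in for the learned classifier.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def accuracy(X, y):
    return ((X @ w > 0.5).astype(int) == y).mean()

print(f"training-time accuracy: {accuracy(X_train, y_train):.1%}")  # looks strong
print(f"deployment accuracy:    {accuracy(X_test, y_test):.1%}")    # drops sharply
```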
Some final considerations I think are worth noting include that facial recognition technology is an incredibly cheap surveillance tool to deploy, and that makes it very dangerous because of how quickly it can proliferate. Our faces are such a central part of our identities, and generally do not change over time, so this kind of surveillance is very concerning. I have only presented a few technical objections to facial recognition technology today, and taken as a whole with the many other criticisms, I believe the enormous risks of facial recognition technology far outweigh any benefits that can be gained.
Thank you.
:
Thank you for the chance to speak today.
My name is Elizabeth Anne Watkins and I am a post-doctoral research fellow at the Center for Information Technology Policy as well as the human-computer interaction group at Princeton University, and an affiliate with the Data & Society research institute in New York.
I'm here today in a personal capacity to express my concerns with the private industry use of facial verification on workers. These concerns have been informed by my research as a social scientist studying the consequences of AI in labour contexts.
My key concerns today are twofold: one, to raise awareness of a technology related to facial recognition yet distinct in function, which is facial verification; and two, to urge this committee to consider how these technologies are integrated into sociotechnical contexts, that is, the real-world humans and scenarios forced to comply with these tools and to consider how these integrations hold significant consequences for the privacy, security and safety of people.
First I'll give a definition and description of facial verification. Facial recognition is a 1:n system: it both finds and identifies individuals from camera feeds, typically viewing large numbers of faces, usually without the knowledge of those individuals. Facial verification, while built on similar recognition technology, is distinct in how it's used. It is a 1:1 matching system, much more intimate and up close, where a person's face, directly in front of the camera, is matched to the face already associated with the device or digital account they're logging in to. If the system can see your face and predict that it's a match to the face already associated with the device or account, you're permitted to log in. If this match cannot be verified, you'll remain locked out. If you use Face ID on an iPhone, for example, you've already used facial verification.
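To make the distinction concrete, here is a minimal sketch. The face embeddings below are random vectors standing in for what a real system would compute with a learned model, and the threshold is a hypothetical operating point.

```python
# 1:1 verification versus 1:n identification, with made-up embedding vectors.

import numpy as np

rng = np.random.default_rng(2)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled_template = rng.normal(size=128)                   # the face tied to the device or account
probe = enrolled_template + rng.normal(0, 0.3, size=128)   # a new capture of (supposedly) the same face

# 1:1 verification: compare the probe against the single enrolled template.
THRESHOLD = 0.8  # hypothetical operating point
print("verified:", cosine_similarity(probe, enrolled_template) >= THRESHOLD)

# 1:n identification: compare the probe against a whole gallery and ask
# "who is this?", typically without the person's knowledge.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
gallery["enrolled_person"] = enrolled_template
best_match = max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))
print("identified as:", best_match)
```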
Next I'll focus on the sociotechnical context to talk about where this technology is being integrated, how and by whom. My focus is on work. Facial verification is increasingly being used in work contexts, in particular gig work or precarious labour. Amazon delivery drivers, Uber drivers and at-home health care workers are already being required in many states in the U.S., in addition to countries around the world, to comply with facial verification in order to prove their identities and be allowed to work. This means the person has to make sure their face can be seen and matched to the photo associated with the account. Workers are typically required to do this not just once, but over and over again.
The biases, failures and intrinsic injustices of facial recognition have already been expressed to this committee. I'm here to urge this committee to also consider the harms resulting from facial verification's use in work.
In my research, I've gathered data from workers describing a variety of harms. They're worried about how long their faces are being stored, where they're being stored and with whom they're being shared. In some cases, workers are forced to take photos of themselves over and over again for the system to recognize them as a match. In other cases, they're erroneously forbidden from logging into their account because the system can't match them. They have to spend time visiting customer service centres and then wait, sometimes hours, sometimes days, for human oversight to fix these errors. In other cases still, workers have described being forced to step out of their cars in dark parking lots and crouch in front of their headlights to get enough light for the system to see them. When facial verification breaks, workers are the ones who have to create and maintain the conditions for it to produce judgment.
While the use of facial recognition by state-based agencies like police departments has been the subject of growing oversight, the use of facial verification in private industry and on workers has gone under-regulated. I implore this committee to allocate attention to these concerns and pursue methods to protect workers from the biases, failures and critical safety threats of these tools, whether it's through biometric regulation, AI regulation, labour law or some combination thereof.
I second a recent witness, Cynthia Khoo, in her statement that recognition technology cannot bear the legal and moral responsibility that humans are already abdicating to it over vulnerable people's lives. A moratorium is the only morally appropriate regulatory response.
Until that end can be reached, accountability and transparency measures must be brought to bear not only on these tools, but also on company claims that they help protect against fraud and malicious actors. Regulatory intervention could require that companies release data supporting these claims for public scrutiny and require companies to perform algorithmic impact assessments, including consultation with marginalized groups, to gain insight into how workers are being affected. Additional measures could require companies to provide workers with access to multiple forms of identity verification to ensure that people whose bodies or environments cannot be recognized by facial verification systems can still access their means of livelihood.
At heart, these technologies provoke large questions around who gets to be safe, what safety ought to look like, and who carries the burden and liability of achieving that end.
Thank you.
:
Thank you very much, Mr. Chair.
I'd like to thank all the witnesses for being present here today. I appreciate it.
I have questions for several witnesses, so I'd appreciate it if the witnesses could be brief, yet pithy, in their comments.
Mr. Jenkins, in a question that you had from my colleague, Mr. Williams, you were asked about setting up fingerprinting versus facial verification. From what I heard from Dr. Wang, you and other witnesses, they're not quite the same thing.
Can you compare the two in terms of their accuracy and how facial recognition technology is used, as opposed to fingerprinting? I'm assuming it is really just a process of trying to match up a dataset to another dataset. Is that correct?
:
Thank you very much for that.
Dr. Wang, thank you very much for your presentation. If I may suggest, I know that you only brought to our committee a couple of the problems that your research has identified. If there are others that you would like to share with this committee.... We have a common saying here that if we don't hear it or if we don't read it, we can't report on it. We would certainly appreciate it if you felt you had the time and could send us more examples of what you consider some of the limitations of facial verification.
I'd like to go back to the two big problems that you identified, which are brittleness and interpretability.
I was wondering if you could talk a bit more about the brittleness of it. Bad actors could circumvent the system, but there's also the vulnerability of people who have no intention of circumventing it, yet are victims of the biases. I think you talked about machine learning and how all it does is accentuate the biases that already exist in society in general.
Am I correct?
:
Yes. Each of us has one face, which has its own appearance. That appearance changes a lot, not only over the long term as we grow and age, but also from moment to moment, as viewpoints change, as the lighting around us changes, or as we change our facial expression or talk.
There's an awful lot of variation, and this is a problem. What you're trying to do, of course, in the context of facial recognition, is to establish which of the people you know, or have stored in some database, you are looking at right now. That variability is difficult to overcome. You're always in the position of not knowing whether the image before you counts as one of the people you know or whether it is somebody new.
I think the variability is fundamental to the problem that we're discussing. Different people vary in their appearance, but each person also varies in their appearance. Separating those two sources of variability to understand what you're looking at is computationally difficult.
:
Mr. Chair, I'll happily take those 30 seconds as offered.
Mr. Chair, I think we can all agree that the technical aspects of this study are challenging for this committee. I'm not sure we're going to get as deep as we need to go in order to get the kind of report that is going to be required out of this in the time we have allotted, so I'm going to put some very concise questions to all of the witnesses, starting with Dr. Watkins.
Dr. Watkins, based on your subject matter expertise, what would be your top legislative recommendations to this committee? We're going to be putting together a report and hope to have some of these recommendations reflected back to the House for the government's consideration.
:
Thank you so much. I would say that I have three top recommendations.
The top one would be to establish a moratorium. The technology is simply too unreliable for the futures and the livelihoods over which we are giving it responsibility.
The second two recommendations would involve accountability and transparency.
We need better insight into how these tools are being used; where the data is being stored; how decisions are being made with them; whether or not humans are involved; and how these decisions are embedded within larger bureaucratic and organizational structures. Some kind of documentation to give us insight into these processes, such as algorithmic impact assessments, would be very useful.
Further, we need regulatory interventions that produce accountability and that build the kinds of relationships between government, private actors and the public interest that ensure the needs of the most vulnerable are addressed.
Professor Watkins, I noted that a report called “Now you see me: Advancing data protection and privacy for Police Use of Facial Recognition in Canada” states that “Danish liberal deputy Karen Melchior said during parliamentary debates that 'predictive profiling, AI risk assessment and automated decision-making systems are weapons of “math destruction”', because they are 'as dangerous to our democracy as nuclear bombs are for living creatures and life.'”
Given that kind of framing of “weapons of 'math destruction'”, you noted that accountability in the private sector is going to be important. I note that Amazon has just had its first unionization. Hopefully, there will be some discussions around this.
What safeguards should we be putting on the private sector to ensure that these “weapons of 'math destruction'” are not unleashed on the working class?
:
That's a fantastic question. The private sector often goes under-regulated when it comes to these sorts of technologies.
There's a really fascinating model available in the state of Illinois under their Biometric Information Privacy Act. They established that, rather than having a notice and consent regime whereby users have to opt out of having their information used, it's actually the reverse: users have to opt in. Users have to be consulted before any kind of biometric information is used.
Biometric information is defined quite widely in that legislation. As far as I can recall, it includes facial imprints as well as voice imprints. This legislation has been used to bring lawsuits against companies in the private sector—for example, Facebook—for using facial recognition in their photo-identification processes.
So looking at that kind of legislation, which places control over biometric information back into the hands of users from the get-go, would be very advantageous in terms of taking steps toward putting guardrails around the private sector.
While AI, machine learning and algorithmic technologies appear to be very futuristic, very innovative and brand new, they're based on data that has been gathered over years and decades, reflecting things like institutional biases, racism and sexism.
This data doesn't come from nowhere. It comes from institutions that have engaged, for example, in over-policing certain communities. Processes like over-policing then produce datasets that make a criminal look a certain way, when we know that doesn't actually reflect reality. These datasets reflect the institutional ways in which those organizations see populations.
Those datasets are then the very datasets from which AI and machine learning systems learn what the world is. So rather than being innovative and futuristic, AI, machine learning and algorithmic processes are actually very conservative and very old-fashioned, and they perpetuate the biases that we, as a society, ought to be figuring out how to move past.
:
Okay, I was just asking for your perspective on that, but thank you.
I think this committee, both in this study and others, has heard a lot about the concept of consent. Certainly, when you use facial recognition on an iPhone, an Android device or a computer, you're consenting to your picture being used to log in and whatnot. That is very, very different from the widespread practice of scraping the Internet for images and law enforcement making a determination. That's an important distinction.
To Dr. Khanna, in 2016 it was reported that the federal government tested facial recognition technology on millions of travellers at Toronto Pearson International Airport. What type of negative ramifications could there be for those several million travellers who passed through border control at terminal 3 at Pearson between July and December 2016 when this pilot project was running? Could you outline what some of those concerns might be in a very real-world example?
Going on with that, Dr. Watkins mentioned the benefits of one-to-one facial verification versus general facial recognition, so there are some advantageous uses of the technologies. As Mr. Khanna mentioned, as legislators we have to think about how we're behind the curve here, and that curve is trending further ahead of us. At the same time, is there a way in which we can set up basic, fundamental legislative guardrails at this point, whether they're anchored in privacy or in preventing scraping from open-source platforms, that could create a safety net to start with? We're constantly going to be dealing with novel and emerging technologies, but are there key principles we should be considering in guardrail legislation?
I'm wondering if Mr. Khanna or Dr. Watkins would have any suggestions here.
Through you, Mr. Chair, I have one more open-ended question.
We hear a lot of talk about a moratorium. For me, the key question is how to implement a moratorium. My key concern is actually the relationship between private and public enforcement: there are contracts set up through third party structures, and currently there is a loophole.
To Dr. Watkins, Mr. Khanna or Dr. Jenkins, what would be key guardrails in a moratorium?
I want to thank our witnesses for their time and expertise on this important study we're undertaking.
I want to go around to all four witnesses to ask them a question following on where Mr. Green was going.
When you take artificial intelligence and machine learning, tie that in with facial recognition, and then consider the possible application of that in the criminal justice system, will this significantly impede constitutional rights, the charter freedoms that we have here in Canada, if it is used under the Criminal Code?
I will start with Ms. Wang.
As we're going through this and we're hearing loud and clear on the recommendations—accountability, transparency, putting in place a moratorium until we have actual legislation in place—how do we bring forward, as parliamentarians, the proper safeguards to ensure that facial recognition is being used correctly, that bias is removed, that discrimination is eliminated, or minimized at the very least, so that we can write into the Criminal Code, the Privacy Act, PIPEDA, the guardrails we need to make sure we're not relying overly heavily on facial recognition technology, keeping in mind that there are always going to be issues around public safety and national security?
I'll go to Mr. Khanna first.
I think that each model is developed in the context of the society it's made in, and so models developed in Asia also have lots of biases. They are just a different set of biases than those in models developed by Canadians or Americans.
For example, a lot of object recognition tools have shown that they are not as good at recognizing the same objects—for example, soap—from a different country than the country where the dataset came from.
There are ways to get around this, but this requires a lot of different people involved with different perspectives, because there really is just no universal viewpoint. I think there's never a way of getting rid of all the biases in the model, because biases themselves are very relative to a particular societal context.
I'm going to follow my colleague, Ms. Hepfner, on some of the questioning.
Mr. Jenkins, again, you've written about the other-race effect, which is a theory that own-race faces are better remembered than other-race faces. We know that facial recognition technology is very accurate with white faces, but its accuracy drops with other skin colours.
Could this be due to the other-race effect of the programmers, essentially a predominantly white programming team creating an AI that is better at recognizing white faces? Would the same bias apply to an FRT AI developed by a predominantly, let's say, Black programming team? What does your research show, and what are you seeing in your studies?
:
Bias among programmers could be a factor, but I don't think we need to invoke that to understand the demographic group differences that we see in these automatic face recognition systems.
I think that can be explained by the distribution of images that are used to train the algorithms. If you feed the algorithms mostly, let's say, white faces, then it will be better at recognizing white faces than faces from other races. If you feed it mainly Black faces, it will be better at recognizing Black faces than white faces.
Maybe the analogy with language is helpful, here. It matters what's in your environment as you are developing as a human, and it also matters as you're being programmed as an artificial system.
Thank you to all of our witnesses for taking the time today.
I want to leave an open question here for any of our witnesses.
Based on your responses to Mr. Green's earlier question, there seems to be a considerable amount of legislation needed before FRT is widely used.
My questions come from Richmond, British Columbia. It's home to a strong South Asian and Asian demographic. We learned from an earlier panel expert who joined us that the VPD is using FRT without a lot of oversight.
Are any of you aware of any British Columbia law enforcement agencies using FRT?
Mr. Khanna, are you aware of any of this?
Two of the points that I brought up are interpretability and brittleness. For brittleness, bad actors are able to just trick the model in different ways. In the specific study I'm referring to, they print a particular pattern on a pair of glasses, and through this they can actually trick a model into thinking they're somebody completely different.
The other part is interpretability. Models right now are very uninterpretable, because they pick up on whatever patterns they have figured out will best help them with their task. We don't necessarily, as people, know what patterns the models are relying on. They could be relying on—
:
Thank you very much, Mr. Chair and, through you, thank you to the witnesses for your very compelling testimony today.
Just to add to the list that you have provided, I will say that in 2018 Taylor Swift used facial recognition technology to identify some of her stalkers. That was a very fascinating, interesting and, I think, complex use of technology.
I know we have been talking about moratoriums. Perhaps I will start by asking our witnesses what a moratorium would achieve in an environment in which technology and innovation occur at such a fast pace?
Perhaps I will start with Dr. Khanna.
:
Thanks for that. Basically, privacy laws and the protection they offer in this instance are kind of that balance; it's not black or white, where you opt in or opt out. You're kind of there, baked into that cake, as you said, Dr. Jenkins.
In that case, then, we see that social media companies, for example, or other platforms build in these algorithms, the artificial intelligence that creates convenience in shopping. They purchase datasets so that companies can, in effect, buy their customers and advertise to them. Are there any regulations that you think could be part of a potential bill of rights that would protect Canadians in the way in which their data is sold to these companies?
Does anybody want to take that on? I'm sorry. I just don't know who to address this to. It's a complex one.
:
That would be very much appreciated.
I want to ask a question. In the examples that have come about today, I guess the most “benign” use of FRT, if that's the word for it, or one of the more benign uses spoken of, is the one that many of us are familiar with. That's the facial recognition used to unlock an iPhone or a mobile device. An individual has consented to this use and has supplied a photo of themselves for their convenience and for the biometric security of their own phone. On a personal level, if the device allows it, I find a fingerprint much more convenient, easier and more reliable than a photo.
If this seems to be, on this panel or around the table, one of the more easily supported uses of this technology, are there problems even at that level, where a consumer is readily, or at least relatively readily, consenting to this type of use?
I'll maybe ask each of our panellists to weigh in on this for a quick moment. Would this be an acceptable use of FRT? Would this be included in the moratoriums that some are asking for?
Let me start with you, Dr. Watkins, just for a quick answer.
:
Thank you so much. This is a great question.
I urge the committee to think about the context in which consent takes place. Consent can often be much more complex than it looks from the outside. It's not always a yes or a no, or “no, I don't want to do this, so I'm going to go to the next alternative”. Often, there are no alternatives. Often, there are financial pressures that force people to comply with these kinds of protocols.
For example, with the facial verification that's in place at many gig companies, there is no alternative. If workers don't comply with facial verification, they're simply off the app.