S4-Episode 2: At face value: Facial recognition technologies and privacy

Apr 04 2024

From unlocking smartphones to solving crimes, facial recognition technologies are re-shaping identification as we know it. In this episode, we peer into the privacy and human rights implications of facial recognition systems with technology and human rights lawyer Cynthia Khoo.

The information, opinions, and recommendations presented in this podcast are for general information only. They should not be relied upon as a substitute for legal advice. Unless specifically stated otherwise, the IPC does not endorse, approve, recommend, or certify any information, product, process, service, or organization presented or mentioned in this podcast, and information from this podcast should not be used or reproduced in any way to imply such approval or endorsement. None of the information, opinions, and recommendations presented in this podcast bind the IPC’s Tribunal that may be called upon to independently investigate and decide upon an individual complaint or appeal based on the specific facts and unique circumstances of a given case.

Cynthia Khoo is a technology and human rights lawyer and most recently, a senior associate at the Center on Privacy and Technology at Georgetown Law, in Washington, D.C. She is a research fellow at the University of Toronto’s Citizen Lab.

  • How facial recognition technology works [4:09]
  • Use of facial recognition technology by government agencies [8:02]
  • Use of facial recognition technologies in the private sector [10:15]
  • Stalkerware and facial recognition technology [15:07]
  • Impact of biased algorithms on historically marginalized groups [17:40]
  • Public anonymity as an essential privacy right [22:00]
  • Facial recognition and mugshot databases, guidance for police in Ontario [25:22]
  • The option to roll back facial recognition systems [29:30]
  • Guardrails and protections in contracts with third party vendors [32:12]


Info Matters is a podcast about people, privacy, and access to information hosted by Patricia Kosseim, Information and Privacy Commissioner of Ontario. We dive into conversations with people from all walks of life and hear stories about the access and privacy issues that matter most to them.

If you enjoyed the podcast, leave us a rating or a review.

Have an access to information or privacy topic you want to learn more about? Interested in being a guest on the show? Send us a tweet @IPCinfoprivacy or email us at [email protected].

Patricia Kosseim:

Hello, I’m Patricia Kosseim, Ontario’s Information and Privacy Commissioner, and you’re listening to Info Matters, a podcast about people, privacy, and access to information. We dive into conversations with people from all walks of life and hear real stories about the access and privacy issues that matter most to them.

Hello, listeners, and welcome to Info Matters. In this episode, we’ll be looking at a technology that has its eyes on you. Facial recognition technologies are used to identify an individual or to verify that they are who they say they are, using their face. The distance between your eyes, the shape of your nose, the length of your jawline, all of these can be used to create a unique digital template.

With a variety of possible applications, facial recognition is becoming increasingly prevalent, used by private sector companies, government organizations, law enforcement, and individuals alike. It can be used to verify your identity for things like unlocking your smartphone or tagging people in images you post on social media, or to identify tenants who live in a building, prevent unauthorized access, and track the movements of visitors. Or what about retailers trying to identify potential shoplifters?

And for law enforcement, it’s a way to find missing persons, solve crimes, and monitor large crowds more efficiently. Facial recognition technology clearly has a wide range of potential applications, but what does it mean for privacy?

Cynthia Khoo is here to help us unravel the intricacies of facial recognition technology and its far-reaching implications. She’s a research fellow at the University of Toronto’s Citizen Lab, and most recently a senior associate at the Centre on Privacy and Technology at Georgetown Law in Washington, DC. Cynthia, welcome to the show and thank you so much for taking time out of your busy schedule to join us today.

Cynthia Khoo:

Thank you so much for having me.

PK:

Cynthia, let’s get started by getting a sense of your background and what led to your interest in the area of technology, human rights, and privacy.

CK:

I went into law school essentially already wanting to do technology law. I was always very interested in science fiction, so I wanted to do something cutting edge, whether it was emerging technologies or bioethics in the context of health law, for example. But it actually started with a really amazing seminar I took at UBC called the Rhetoric of Health Science and Medicine, and that was my introduction to science and technology studies, which is an interdisciplinary field that intersects a lot with, and that we draw on a lot in, technology and human rights law.

So once I went into law school, whatever chance I had to work at this intersection, whether it was on copyright and freedom of expression, net neutrality or affordable internet access or tech-facilitated gender violence, I jumped at that opportunity. And so that’s led to an unorthodox legal career because I really based it on what would let me work most on the things that I cared about.

Then I ended up doing this fellowship at the Citizen Lab. I did the LLM at the University of Ottawa’s Law and Tech program. And then of course, as you mentioned, my most recent few years were at the Centre on Privacy and Technology at Georgetown Law, which has a really strong civil rights focus.

PK:

Your passion for all these issues at the intersection of technology, human rights, communication, and privacy law is really fascinating, and I’m pretty passionate about them too. So this is going to be a great conversation. Cynthia, listeners may not be familiar with facial recognition technology. Can you unpack that a little bit for us and explain how facial recognition technology works?

CK:

Broadly speaking, facial recognition technology is used to identify people based on metrics that are taken from their face. So as you mentioned earlier, whether it is the space between your eyes, the shape of your chin, different metrics that are taken from different set points on your face that are then used to create a face print that is then used to match up with the face prints of other faces, whether it is in a large database or whether it is a previous photo that you have submitted to the person who is running the system, for example.

So this has largely resulted in two main use cases for facial recognition. The first is when you are unlocking your iPhone, for example, or when you are going through the airport or being verified for government services when they are asking, “Are you who you say you are?” So we are matching your photo to another photo of you, and that’s known as one-to-one facial recognition or verification.

The other type, which is generally the type we talk about the most when it comes to a lot of the privacy implications, is when we have a photo of you and we are trying to figure out who you are in the world. So we have a photo of you and we have a database of hundreds or thousands of photos, and that could be a mugshot database, it could be a database of social media profiles, it could be a database from anywhere that has a lot of photos. And so one-to-many recognition is when we take your photo and we compare it to that database and see if there’s a match in there so we can find out who you are.

So what happens is you have the photo of the person, which could be taken from a CCTV camera, from a video, from a still, from the internet. You might then process the photo to make it clearer if there’s bad lighting, if there’s a weird angle, if something about the photo can be made clearer for a better match. Once you have the photo that’s ready to be processed, you then submit it to the facial recognition system and the algorithms will take a face print of that photo and then match it against the face prints they already have in that database.

What happens after the match is that it’s not a CSI process where there’s bright lights and they say, “This is the person, this is the guy.” What happens is you actually get a range of photos. So you basically get a photo lineup where you get five to 10 to 12, however many photos of potential matches or likely matches. And then different types of facial recognition systems will add their own design features to indicate how confident they are of these matches.

So some of them might be color coded. Some of them might have percentage confidence levels. Some of them might rank each photo. At that point, the human analyst has to decide which of those photos do they think most likely matches the probe photo of the suspect, let’s say in the criminal investigation. And then it’s after that point that they would then forward either that one photo that they’ve decided is the match or the top five or top 10 photos to the investigative team. And of course, it will differ from system to system and across different contexts. But I think in the most popularized context when we talk about facial recognition and why so many people are worried about it, this is basically the process that it goes through.
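
To make the matching process described above a bit more concrete, here is a minimal, illustrative sketch of a one-to-many search. It is not any vendor’s actual pipeline: the embed() function, the toy gallery, and the similarity measure are stand-ins, and real systems use trained deep networks, face detection and alignment, and far larger databases.

```python
# Illustrative sketch only, not any vendor's actual system.
# Real systems derive "face prints" (embeddings) from trained deep networks;
# here embed() is a stand-in that just normalizes raw pixel values.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a model that turns a face image into a fixed-length
    vector, the 'face print' described above."""
    vec = face_image.astype(float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def search_gallery(probe_image: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   top_k: int = 10) -> list[tuple[str, float]]:
    """Compare the probe face print against every face print in the gallery
    and return the top_k candidates with similarity scores: the ranked
    'photo lineup' that a human analyst then reviews."""
    probe = embed(probe_image)
    scores = {identity: float(np.dot(probe, embed(img)))
              for identity, img in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Example: search a toy gallery of random "photos" with a random probe image.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.random((64, 64)) for i in range(100)}
for identity, score in search_gallery(rng.random((64, 64)), gallery, top_k=5):
    print(identity, round(score, 3))
```

The key point mirrored here is the one made above: the system returns a ranked list of candidate matches with confidence scores, not a single definitive answer, and a human analyst still has to decide which, if any, is a true match.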

PK:

Wow, that was an excellent explanation. And as you said, it’s not an infallible technology. Right?

CK:

Exactly.

PK:

In your explanation, you were referring to one example in the law enforcement context where this technology is used in connection with investigations, for instance. But before we zero in on that one, I want to talk about other users of this technology. How are they using and deploying this technology and for what purposes?

CK:

There are several examples of non-law-enforcement government agencies using facial recognition to varying degrees of success or outcry. One example is that in recent years, the federal tax agency in the United States rolled out a facial recognition system as part of filing tax returns, but due to massive public outcry and pushback, they decided to roll that system back. Closer to home, there has been a series of immigration cases in Canadian law where the federal immigration department has been alleged to be using facial recognition technologies to revoke refugee status from people in Canada. Applicants will have successfully applied, and then weeks or years later they will be investigated, and someone will come to them and say, “We actually found a photo of you that shows this is your identity and not the identity you told us, and therefore you don’t meet the criteria for refugee status and have to leave the country.”

And so in these cases, it’s very interesting because they are saying that facial recognition technology must have been used because where did the photos come from? Why are they claiming that it is a match with these differing identities? And if it was, was it reliable? What were the other photos? What was the data used? And that is a huge due process issue because if people cannot even know whether the technology is used in the first place, then how can they do anything about it or correct errors, especially given how much we know about how unreliable and biased the technology can be.

PK:

We’ve canvassed the use of facial recognition technology by law enforcement and by other government institutions, including immigration. But what about the private sector? What’s its interest in using facial recognition technology?

CK:

I’m very glad you asked this because so much of the attention on facial recognition technologies has been on their use by law enforcement and by government agencies, but they are seeing a lot of use in the private sector as well. One example of private sector use of facial recognition was the Cadillac Fairview mall in Alberta, where the mall directories were found to have facial recognition technology embedded in them, which was then found to have been a violation of their provincial privacy law.

Recently in the United States, Madison Square Garden caught a lot of media attention because it was found to have created a list of lawyers who worked at firms representing people in lawsuits against Madison Square Garden, and it made a blacklist of all those lawyers, even if the lawyers were nowhere near the case. If you worked at one of those firms, you were blacklisted. And a lawyer found this out because they tried to enter for an event and someone came up to them and essentially said they weren’t allowed to enter because they were on this list, and that was due to the venue’s facial recognition system.

What’s notable about these particular cases is just the enormous amount of discretion and opacity with which these private entities can deploy facial recognition technology. Nobody knew they did it until somebody happened to find out; they didn’t have to seek prior authorization for anything. But I think we are starting to see pushback. So for example, Fight for the Future, which is an internet freedom nonprofit in the United States, ran an incredible campaign in response to the Madison Square Garden incident and got over a hundred musicians and artists to sign onto a statement saying that they would boycott any concert venue that was using facial recognition technology.

So we’re already seeing people responding to this because it’s such a visceral topic, because it’s people’s faces. If you lose your credit card or your driver’s license, then you can get that replaced, but you can’t really change your face. And so I think it’s an issue that really strikes people with a lot more immediacy than necessarily some other privacy issues do.

PK:

Well, I’m a lawyer and I’m sure glad facial recognition technology wasn’t around when I went to see Billy Joel at Madison Square Garden because it was an amazing concert and I would’ve hated to be refused entry. But on a more serious note, it is worrisome that these technologies are increasingly accessible, directly accessible to individuals who can use this software. This is where we’re starting to see some very nefarious uses of these technologies, and I was hoping you could talk to us a little bit about your research into some of the individual uses that are made of FRT.

CK:

Absolutely. Kashmir Hill’s book on Clearview AI had a lot of striking details in it. Before Clearview became a household name, when its founders were pitching investors, they were freely giving an early version of the app to these investors to use with whoever they wanted. So there’s a passage in the book that talks about how investors were essentially showing it off at parties, on dates, with family members, where this special circle of people, by virtue of their money and power, had early secret access to this incredibly privacy-invasive app that they could just use as a party trick, as I think the book called it.

Another example is that there are multiple instances of people who are able to cobble together facial recognition technologies online just through accessing open-source code databases, and then end up using them or distributing them through private channels, through chat forums, or just online to anyone who wants them, where they will then be used for nefarious ends.

So for example, users at one point were using it to uncover the identities of actresses in porn videos that they were watching, or using it to uncover the identities of sex workers, to shame them, or to find out if someone they knew, whether an acquaintance or a colleague or a classmate, had racy photos or nude photos online. So that level of violation has an incredibly gendered aspect.

Several years ago, I co-authored a report on stalkerware. Stalkerware apps are a type of mobile app that you can get anywhere, the Google Play Store, the Apple App Store, and install on somebody’s phone, and they will let you track all of their activities on their phone, whether it’s their text messages, their calls, their call history, remotely tracking and monitoring them through their mic or through their camera, access to their social media platforms, their private messages on social media, what Wi-Fi networks they use, where they are, their GPS location.

So just a really extraordinary range of data that someone now has access to. These apps can either work covertly, so the person never knows the app is even there on their phone, they don’t know what’s happening. Or in many cases, they actually can be used openly as a form of power and control over that person. So in the context of intimate partner violence, for example, where the abuser wants the person to know that they are following and tracking their every step with this level of detail.

So it results in a whole range of harms, financial, professional, emotional, physical, psychological, let alone incursions on their human rights because you can imagine what that does to somebody’s privacy, to their freedom of expression if they know that every single thing that they’re saying is going to be monitored by this person. And so on one hand you have these stalkerware apps and then you have people who are building out these facial recognition technologies.

So you can imagine that it’s not that far a leap for people to end up using the two in combination in different scenarios. Imagine if you saw someone on the street or at a bar who you thought was attractive, and you could snap a photo of them and instantly have access to all of their social media profiles and their name, and possibly their workplace or maybe their home address, depending on what is online.

Even though we may think of technology issues in terms of, okay, well, now we’re thinking about this topic and then we’re thinking about this topic, there’s nothing that stops all of the topics from merging together to impact one person’s life. And so I think we have to continually be aware of that as well when thinking through these issues.

PK:

So you’ve talked a lot about harms to individuals. What about harm to groups? How are groups or communities, particularly marginalized groups or racialized groups potentially impacted by facial recognition technologies?

CK:

A lot of studies have been done on how facial recognition algorithms end up being biased against certain historically marginalized groups. So for example, Dr. Joy Buolamwini co-authored a study called Gender Shades that showed how facial classification algorithms were the most inaccurate for darker-skinned women versus any other demographic. And so if you applied that in real-world situations such as policing or accessing government services, then you can imagine how that would disproportionately impact the people whom these algorithms worked the worst on.

And something she said in her book, Unmasking AI really struck me, which was that for one of the algorithms that she tested, only 4.4% of that data set was women of color. So she said that it would be possible for an algorithm on that data set to misidentify every single woman of color, but if it was successful on all the other demographics, then that algorithm would still be considered 95.6% accurate across the board, and nobody would know it had all of these internal disparities unless you actually dug behind the algorithm into these different demographics.
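
The arithmetic behind that 95.6% figure is easy to verify. The short sketch below uses the 4.4% share cited above and assumes, as a hypothetical extreme, that every woman of colour in the data set is misidentified while everyone else is classified correctly:

```python
# Back-of-the-envelope illustration of the 4.4% / 95.6% point above.
# The per-group accuracies are hypothetical extremes, not measured values.
subgroup_share = 0.044      # share of the data set that is women of colour
subgroup_accuracy = 0.0     # assume every one of them is misidentified
other_accuracy = 1.0        # assume everyone else is classified correctly

overall_accuracy = (subgroup_share * subgroup_accuracy
                    + (1 - subgroup_share) * other_accuracy)
print(f"Overall accuracy: {overall_accuracy:.1%}")  # -> 95.6%
```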

There are other tests that have been done on different types of facial recognition algorithms showing that they can be up to a hundred times more inaccurate for Black people, for Asian faces, or for Indigenous people as compared to white men. And so again, when you think about how this would play out, that has enormous repercussions for groups based on their demographics, because these are groups that are already, we know, over-policed, over-criminalized, over-incarcerated, and over-targeted for police brutality and discrimination.

And so a technology like this would only accelerate all those biases and systemic discriminatory outcomes, but with the additional layer of opacity where people can claim, “Well, because we’re using a technology and because it’s scientific and mathematical, it can’t be biased.” But of course we’ve seen by this point that it very much can be. All we have to do is look at the wrongful arrests that have already occurred in the United States as a result of biased facial recognition algorithms.

In particular, we know about Robert Williams, who was arrested in front of his two daughters even though he had an alibi for the time the crime was committed. We know about Michael Oliver, who spent two and a half days in prison, and Nijeer Parks, who spent 10 days in prison, all of them without being told what they were in for, or before they were able to see the actual photos or what was actually used as part of that facial recognition process.

So it would not surprise me if this has also occurred, or is going to occur, in Canada if we were to roll out this technology more, combined with the racism that is endemic to the technology industry when you look at the demographics of who is in that industry and who is creating these technologies. And so I think people just have to be careful when they’re deploying these technologies. It may seem like they work across the board when you’re looking at the public in general, but privacy harms cannot only be thought of in terms of how they impact the public in general; you have to see which specific parts of the public they’re impacting, because some of them will always be impacted more, and more harshly, with worse outcomes. And it always tends to be the same groups, because of historical and systemic discrimination.

PK:

As you know, many people say this. I don’t believe it, and I know you don’t either, but I’d love for you to refute it in your own words: that old adage, if you have nothing to hide, then what’s the problem? Why do we worry about surveillance technologies like facial recognition when we’re in public spaces? Why should it matter and why should we care?

CK:

That’s a great question. Having anonymity in public is an essential privacy right, and the Supreme Court of Canada has recognized this, so it’s constitutionally protected. And a lot of our privacy laws were created on the assumption that people generally do enjoy a degree of privacy and anonymity when they’re walking around in public. When that idea was formulated, people did not have the ability to take a photo of you out in public and then instantly know everything about you. So that’s the first thing. The second thing is thinking about all the things that could happen if you no longer had anonymity in public.

So as just a few examples, take the stalking example that we mentioned earlier. Say someone has survived and escaped a situation of intimate partner violence or family violence, and then someone is able to track them down afterwards because they were randomly spotted at a cafe, identified, and it somehow got back to the person they’re trying to escape. Undercover intelligence agents, for example, could potentially be identified. Or say you got into a random altercation with someone.

So imagine a typical road rage scenario where normally it would happen and then you drive away and generally one hopes that it’s done. But what if the person who was in the state of road rage could take a picture of you and follow you home or know where you worked, and then if they happen to be a vindictive person, pursue you there. There’s no natural end to it anymore because physical space is no longer a limitation.

When it comes to civil liberties, think of not being able to attend a protest without the police, the government, or your employer instantly being able to know who you are, and people potentially being able to retaliate against you for speaking out and for what you believe in. So that’s the importance of why we need to maintain privacy in public.

As for the nothing to hide, nothing to fear argument, I would say two things. The first is that just because you have nothing to hide doesn’t mean other people do not have things that they need to hide, which is not because they’ve done anything wrong, but because there are people after them or because they’ve been wrongfully persecuted for their political views or for their actions or because they’ve been historically discriminated against.

Essentially, it’s not always about you, it’s to protect other people. And just because you might not need certain protections doesn’t mean it’s right for you to get in the way of ensuring that the people who do need those protections have them. The second point is that you don’t have anything to hide for now. That can change very quickly. You don’t know what will happen in the future. Maybe a government will come into power that disagrees with how you live your life or a part of who you are.

And so it’s not always necessarily the fact that you have nothing to hide, it’s more about who is in power and how might they use that power against you, which is why we need these protections in place for everyone.

PK:

I was thinking how many photos you and I, or any person, must be in inadvertently, having just walked by so many tourists taking selfies or group photos. We must be in hundreds if not thousands of these photos somewhere. And when you think that those faces are recognizable, and if you pull them all together, they could tell a real story, not only about the people who were intended to be in the photo, but about people like you and me who were just captured in the frame because we happened to be there in that instant.

It is a very concerning development and a continual encroachment on our right to anonymity, and I think you’re absolutely right to point that out. Cynthia, you know that we recently issued guidance for police in Ontario who are using or intending to use facial recognition technology in connection with mugshot databases, and those guidelines were intended to focus in on this one specific use case of facial recognition technology.

I was wondering what you think of those guidelines and whether they’ll be helpful for law enforcement contemplating using facial recognition technology to facilitate or accelerate their searches through mugshot databases to identify potential suspects.

CK:

Absolutely. I do think it will be incredibly helpful, in part because what I really liked about it is how specific and practical it is. A lot of times when it comes to discussions about technology and society and human rights, and how to regulate them or what to do about them, it’s very easy to get stuck in the place of talking about abstract policy or what should be done conceptually, which obviously we have to start with. But eventually it has to get down to the nitty-gritty of when the police officer is facing the case in front of them: what do I actually do? What is the first step? What is the second step?

And I think this guidance does bring things down to that level. It talks about the specific considerations they have to think about, the particular documents they have to prepare, whether it’s a privacy impact assessment or logging the ways they are using their facial recognition technology. And the fact that it’s confined so explicitly to facial recognition technology in the context of mugshot databases is also great. I also think it was really great how there was that continual emphasis on privacy, equality, civil liberties, and other human rights, as well as transparency and accountability.

It also stresses that use of these tools has to be necessary and proportionate, and it pays attention to the role of third-party facial recognition technology vendors. And that is such a key point, because private companies are not held to the same standards. They’re not necessarily bound by the constitution the same way that law enforcement entities are. And so in cases where you have law enforcement relying on third-party private vendors, there has to be a lot of attention paid to that relationship, to regulating it, and to making sure that you don’t end up in a situation where private vendors can collect all of this data, not subject to any constitutional safeguards, and then sell it or share it without a warrant with law enforcement, who then use it in criminal investigations that should be subject to constitutional safeguards.

And so that whole system eventually results in what could be considered an end run around section 8 of the Charter, which protects our right to privacy. And so without closing in on that particular relationship, it just results in a huge loophole that should be cause for a lot of alarm. And then I would, I think, uplift the idea of always keeping on the table the option of rolling back a system, or canceling a system, or abandoning a system even before it gets deployed, because if that is not an option, then a lot of consultation and a lot of cautionary words are basically in vain.

So I think people should always keep front of mind that it always has to be an option to undo something, and that just because we’ve been put on a certain path does not mean we are stuck on that path forever. We can always course correct.

PK:

It’s interesting that you picked up on the role of the third-party vendor of these technologies and the importance of scrutinizing the source of the technology, how it’s developed, and the lawfulness of that technology, especially in light of the recent Supreme Court of Canada decision in R versus Bykovets, which, as you know, really focused on the third party, adding this third party to our constitutional ecosystem and taking what used to be a horizontal relationship between the individual and the state and making it a tripartite relationship.

So I think the point you make is a really important one. I also think that the fact that you’ve picked up on the granular nature of the guidance is very helpful because that’s exactly what law enforcement had asked for was to focus on this one use case and provide more granular, practical guidance to help them as they’re contemplating deploying this technology.

If I may, I want to ask you just one last question before I let you go. As you know, our strategic priorities, and in particular the one that focuses on next-generation law enforcement, are of keen interest to our office. Our goal there is to contribute to building public trust in law enforcement by working with relevant stakeholders to develop the necessary guardrails for the adoption of new programs or technologies that aim to do both: protect public safety, but also protect Ontarians’ privacy and access rights.

So in terms of our future work in this focus area of priority, what would be your advice for us? How can we work to advance this strategic priority in your view?

CK:

I think most of my suggestions would essentially be reiterations of a couple of things I said earlier about facial recognition technologies, but which I think apply to next-generation law enforcement as a whole. So for example, focusing on that relationship between law enforcement and third-party commercial vendors, it would be amazing to see IPC guidance on what public agencies and law enforcement entities should do when entering into contracts with, or when considering relying on, these private commercial entities, to ensure that our privacy rights are still protected and that the protection still meets certain standards, for example, ensuring that particular obligations are included in any contracts they have and that there is a way to actually enforce those contracts, with consequences for those third-party vendors.

And then another thing that I would suggest in terms of building public trust is going back to what I said earlier about ensuring that the option to roll back or abandon or undo is always on the table. And so I think that aspect would go a long way towards building public trust as well.

PK:

Sometimes that may not involve permanently taking it off the table. Sometimes that might involve recommending that they hit the pause button, for instance, until many of these guardrails can be erected and these issues could be ironed out. So it’s not always a permanent no. It might be just a temporary pause and I think that’s an excellent suggestion. And to your point about guidance for third-party outsourcing arrangements or third-party contracts, it’s funny you should mention that because we’re just about to release some guidance we’ve been developing to help public institutions ensure that their privacy and accountability obligations continue throughout that contracting relationship through appropriate provisions in their third-party contracts. So stay tuned for that.

Cynthia, thank you so much for taking the time to join us today and for sharing your views on facial recognition technologies. You’ve obviously thought long and hard and read a lot and written a lot about the subject, so we were very lucky to spend some time with you today.

CK:

Thank you so much for having me. I really look forward to reading that guidance and it was a pleasure speaking with you.

PK:

Well, as we’ve heard, facial recognition is a timely and somewhat controversial topic, and Cynthia has helped explain some of the key concerns associated with this technology in a simple and straightforward way. For listeners who want to learn more about facial recognition and next-generation law enforcement, I encourage you to visit our website at ipc.on.ca. You could also read our recent guidance on police use of facial recognition technology in connection with mugshot databases. And you could always call or email our office for assistance and general information about Ontario’s access and privacy laws. Well, that’s it folks. Thanks for listening to this episode of Info Matters. And until next time.

I’m Patricia Kosseim, Ontario’s Information and Privacy Commissioner, and this has been Info Matters. If you enjoy the podcast, leave us a rating or review. If there’s an access or privacy topic you’d like us to explore on a future episode, we’d love to hear from you. Send us a tweet @IPCinfoprivacy or email us at [email protected]. Thanks for listening, and please join us again for more conversations about people, privacy, and access to information. If it matters to you, it matters to me.
