What happens when an algorithm gets it wrong
The odd thing about Robert Williams's false arrest wasn't that police used face recognition to ID him. It was that they told him about it.
In the first of a four-part series on face ID, host Jennifer Strong explores the false arrest of Robert Williams by police in Detroit. The odd thing about Williams’s ordeal wasn’t that police used face recognition to ID him—it’s that the cops told him about it. There’s no law saying they have to. The episode starts to unpack the complexities of this technology and introduces some thorny questions about its use.
We meet:
- Robert and Melissa Williams
- Peter Fussey, University of Essex
- Hamid Khan, Stop LAPD Spying Coalition
Credits:
This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, and Emma Cillekens. We had help from Karen Hao and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski. Special thanks to Kyle Thomas Hemingway and Eric Mongeon.
Full episode transcript:
Robert Williams: I was completely shocked and stunned to be arrested in broad daylight, in front of my daughter, in front of my wife, in front of my neighbors. It was one of the most shocking things I ever had happen to me.
Jennifer Strong: That's Robert Williams. He's describing what happened outside of his home in an affluent suburb of Detroit called Farmington Hills back in January.
Jennifer Strong: The day started like any other.
Robert Williams: It was just a boring Thursday.
Jennifer Strong: He got up, went to work. But then things got weird.
Robert Williams: On the phone with Melissa around 4.
Jennifer Strong: Melissa is his wife. They're in the middle of a call when he hears the other line.
Robert Williams: I click over. I'm like, "Hello?"
"Robert?"
And I'm like, "Who is this?"
"You need to come down and turn yourself in."
"Who is this?"
"Officer somebody from the third precinct."
"And I need to turn myself in for what?"
So he was like, "I can't tell you that."
And I'm like, "Then, I can't come down."
"Well, If you come down, it'd be much easier on you. You don't want us to come out to your job, do you?"
At this point, I think it's a prank call.
So, I'm like, "Look, man, if you want me, come get me. I'll be at home. Bring a warrant." And I hang up on him.
Jennifer Strong: Melissa's at home waiting for her mom and daughter, and she goes to greet them when they pull in.
Melissa Williams: And as I was walking through, I looked out and the cop car was outside.
And I said, "Oh, so it wasn't a prank call. There really are people here."
And they came to the door.
I answered it.
And they kind of stuck their foot in the door and said, "Send Robert out."
And I said, "He's not here."
And they said, "We just saw him come out of that van."
And I said, "That was my mom. He's not here."
Jennifer Strong: Clearly, something is very wrong, but they don't know what it is.
Robert Williams: There's gotta be a mistaken identity or something. I don't know why Detroit Police are at my house.
Jennifer Strong: Turns out they were there because facial recognition software had wrongly matched his driver's license photo to security camera footage of a person stealing watches.
Robert Williams: I pull in the driveway here, pull up in my regular spot, hop out. By the time I closed the door, the car is in the driveway blocking me in. And they parked this way, across my driveway as if I'm going to back out or something and try to take off.
Melissa Williams: As soon as he shut the door, they were right on him.
And I was in here still because I had the girls, and they were already starting to cuff him by the time we got out there.
Jennifer Strong: He told his daughter to go back inside, that the police were making a mistake, and he'd be right back. But he wasn't right back.
The police took him into custody, and he spent the night in jail. He still had no idea what was going on. And he was angry.
But he says, as a Black man, he had to consider what could happen if he let that show. So he stayed calm–and he waited.
The next morning, officers showed him some photos of a man stealing watches. Except those photos weren't of him. They were of someone else.
Robert Williams: "So That's not you?"
I looked. I said, "No, that's not me."
He turns another paper over. He says, "I guess this is not you either?"
I picked that paper up, and hold it next to my face. And I said, "This is not me."
I was like, "I hope y'all don't think all black people look alike."
And then he says, "The computer says it's you."
If he'd have just brought the picture with him, he could have looked it up and down, and he could have left and say, oh my bad. I didn't mean to bother you.
Jennifer Strong: What's unusual about this story is not that face ID was used to find a suspect. What's unusual is Robert Williams was told, because police aren't required to disclose that. Facial recognition isn't regulated. Not how it's used by law enforcement. Not how it's used by employers.
I'm Jennifer Strong. And this is episode one of a new series exploring what happens when everything around us gets automated. We're kicking things off with a four-part look at facial recognition and policing. We'll meet people building this technology, fighting against it, and trying to regulate how it gets used.
Jennifer Strong: Think of it this way. Facial recognition is being used as a search engine for criminals. And your face is the search term.
By 2016, the faces of half of all US adults were believed to be stored inside systems police use to name suspects. Some refer to it as the "perpetual lineup." But the nation may be at an inflection point, both in its relationship with policing, and with this technology. In June, tech giants Amazon and Microsoft put a pause on selling their face ID products to law enforcement. IBM stopped selling it altogether. Then, New York City passed a bill providing oversight of all surveillance technologies, despite opposition from the NYPD. And after the wrongful arrest of Robert Williams came to light, Detroit Police say they'll only use face ID to investigate violent crimes. And they'll do it with still photos because those are more likely to produce an accurate match. But is it enough?
Peter Fussey: OK, so at the moment we're in East London, in a place called Stratford.
Jennifer Strong: I took a walk with Peter Fussey back in February before the pandemic.
Peter Fussey: Which has historically been an area of a lot of deprivation, which had an awful lot of investment just before the 2012 Olympics, which was staged here.
Jennifer Strong: It's a spot where London's police have been testing cameras that match faces with identities in real time. You are part of a team working on a national surveillance strategy. Is that right?
Peter Fussey: So we're part of a research project. We look at emerging technology and the human rights implications. Separate to that, I also work with the surveillance camera regulator in the UK, and I lead part of his strategy on human rights.
Jennifer Strong: He's studied technological surveillance for more than 20 years.
Peter Fussey: I started looking at closed circuit television, CCTV cameras. They're very familiar on the street, CCTV. I was always surprised by how little people seem concerned about it. I'd, you know, make an odd case for why we should regulate. And it was largely met with indifference. And facial recognition seems very different. It has caught the public imagination. It is in the media on a daily basis.
Newscaster: Well, your face can tell people a lot more than you might think. In a new world of facial recognition technology, your every move can be tracked.
Newscaster: These shoppers don't know it, but a computer is scanning their faces, comparing their features to those of known shoplifters.
Man interviewed: It's horrible. It's an invasion of privacy.
Woman interviewed: This technology is being installed with zero public oversight and accountability.
Second woman interviewed: We're being bullied into taking our picture in order to get our keys.
Newscaster: Even pop star Taylor Swift secretly deployed the technology to root out stalkers.
Jennifer Strong: But while the public outcry has led some places to ban the technology, including tech hub cities like San Francisco and Cambridge, Massachusetts, where MIT is based, London's police tested a highly aggressive version of it in 10 different public spaces.
Peter Fussey: What you see in the UK is live facial recognition, which means that there is a database of individuals the police are interested in. Then, as the public walks past the camera, each of those people is scanned and then matched against that database. Here, you are enacting surveillance before, you know, any offense.
Jennifer Strong: It's one thing for a police department to hold up a photo of someone to try to identify them in a system. And it's something very different to have live identification happening in real time.
Peter Fussey: Yeah, that's exactly right. And I think it's a really important part of the debate that often gets lost.
The other difference is the existing CCTV cameras, or low-tech, analog human surveillance, doesn't involve the capture, processing, and maintenance and management of biometric data, which is a special category of data, and is universally seen as an intrusive practice.
Jennifer Strong: And that special category of data has to be safely sorted and stored. And as he points out, no human can possibly process the volume that's being captured by these systems.
Peter Fussey: That raises some serious questions about how proportionate that is, for instance. How necessary it is to biometrically scan tens of thousands of people just cause you're interested in talking to somebody. Now, if it's a known killer on the loose, or the example that's always given of, you know, a terrorist attack about to happen, then that's different. You can make a much stronger necessity and proportionality argument around that, but less so if it's just somebody that you're interested in talking to about an incident of antisocial behavior or something like that.
Jennifer Strong: Well, the other question, when you say humans can't process that information, but also it's unclear whether the technology can yet either. What happens if you're falsely identified?
Peter Fussey: If the camera says that you are a suspect, you're somebody on their watch list, how many times do we know it's correct? In our research, we found it was correct eight times out of 42. So, on six full days, sitting in police vans, eight times.
Jennifer Strong: He did the only independent review of these trials, and he found it was accurate less than 20% of the time.
Peter Fussey: It may work brilliantly in lab conditions. But, you know, outside, like an environment we're in now, the light is fading, it's winter light. Much of the intelligence picture for a lot of the offenses are linked to the nighttime economy. So facial recognition works less well in low light, and all sorts of issues around that.
Jennifer Strong: It's also less effective across different demographics.
Peter Fussey: So, not just race or ethnicity, but also gender. And that then folds into a whole issue around transgender rights and age as well. Children's faces, for example, give off less information than the face of somebody in their 40s like myself. Why that's important is if the police are using a technology which is not as effective for different groups, then, unless they are aware of those limitations, and unless they can somehow mitigate against them, then it's impossible to say that they are employing a technology that is compatible with human rights.
Jennifer Strong: How do you align a human rights and a surveillance strategy?
Peter Fussey: We often think about things like security as being oppositional to human rights.
But of course, the first responsibility of states under the UN Declaration of Human Rights is to provide for the safety and security of their citizens. So, there's often this framing of liberty versus security, which myself and my colleagues would find quite unhelpful. You know, you can have both, and you can have neither.
Jennifer Strong: We make our way to another spot he wants to show me.
Peter Fussey: Just at the end of this bridge, you can see a pole with some cameras on it. If you were walking along this bridge towards those cameras, you would get to a point where there was a sign saying that facial recognition was in operation. Now, if you wanted to continue your journey, you would have to walk past those cameras. However, the police were saying this was a trial. So if you didn't want to be part of that trial, you had to turn around. And to get to the same point beyond those cameras would take about a 20-minute detour.
Jennifer Strong: OK, so this part is really important.
Peter Fussey: Here, there's no real meaningful consent. If you withdraw consent because you don't want to be on the camera, then you should be able to withdraw consent without penalty. Otherwise it's not consent.
Jennifer Strong: Something else. When you walk down the street, are you aware of the times you cross from a public sidewalk onto concrete that's owned by a business? Did you know your rights to privacy might be different in just a few steps?
Peter Fussey: So here, where we're standing, outside Westfield Shopping Mall, is private space. But we feel it's public. There's lots of people around here. It has the sense of a public space. What happens, though, is if you walk 30 meters to our left, you're in a public area, and all the cameras are owned by public authorities. And if you walk 30 meters to our right, they're owned by private companies.
Jennifer Strong: What about the one over your head?
Peter Fussey: Which one? That one? So yeah, this is owned by a private company.
Jennifer Strong: The difference comes back to a simple point. Public groups are accountable to the public.
Face ID works by mapping out the unique set of measurements between your features, like the spacing between your eyes, the length of your nose and the curvature of your lips. The earliest systems were invented in the 1960s. But for decades, the technology wasn't really useful. Then, in the early 2000s, local law enforcement in Florida created a statewide face recognition program. A decade after that, Facebook invented a new way to start recognizing and auto-tagging people in photos, rapidly improving face recognition to what we have today. Now, it's widely used in airports and by police, but there's little transparency about what systems are used or how.
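To make that matching step concrete, here is a minimal, purely illustrative sketch: a face is reduced to a numeric template (measurements between features, or a learned embedding), and a probe template is compared against a gallery by distance. The numbers, the gallery names, the threshold, and the use of NumPy are all assumptions for the sake of illustration, not a description of any system police actually use.

```python
# Illustrative sketch only: face recognition reduces a face to a numeric
# "template" and compares templates by distance. All values are made up.
import numpy as np

def match_score(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Return a distance score: smaller means the templates are more similar."""
    return float(np.linalg.norm(template_a - template_b))

# Hypothetical templates: e.g., normalized measurements such as eye spacing,
# nose length, and lip curvature, or the output of a neural network.
probe = np.array([0.42, 0.31, 0.77, 0.15])  # face from security footage
gallery = {
    "license_photo_1": np.array([0.40, 0.33, 0.75, 0.14]),
    "license_photo_2": np.array([0.90, 0.12, 0.33, 0.60]),
}

THRESHOLD = 0.1  # arbitrary cutoff; real systems tune this, trading false
                 # matches against missed matches
for name, template in gallery.items():
    score = match_score(probe, template)
    print(name, round(score, 3), "MATCH" if score < THRESHOLD else "no match")
```

The key design choice is that threshold: set it loosely and the system returns more wrong "matches" like the one that led police to Robert Williams; set it tightly and it misses real ones.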
Hamid Khan: Algorithms have no place in policing. I think it's crucial that we understand that because there are lives at stake.
Jennifer Strong: Hamid Khan is an activist fighting against the use of surveillance and many other technological tools used by Los Angeles Police.
Hamid Khan: Anytime surveillance gets legitimized, then it is open to be expanded over time, and has historically been used to trace and track and monitor and stalk particular communities.
Communities who are poor, communities who are Black and brown, communities who would be considered suspect, queer, trans bodies. It's a process of social control.
Jennifer Strong: Khan created the Stop LAPD Spying Coalition, a group he describes as fiercely abolitionist.
He doesn't think restricting the way police use face ID will work. And so, during what could best be described as a tsunami of adoption, with debate mostly focused on best practices, his focus is on getting these technologies banned. And it's been successful. Several data-driven policing and predictive policing programs in Los Angeles ended after public and legal pressure from his group. To Khan, part of how we got to this moment is by changing the way we define and police suspicious activity.
Hamid Khan: The definition is that it's observed behavior reasonably indicative of pre-operational planning of criminal and/or terrorist activity. So, you're observing somebody's behavior, not a fact, but a concern that a person may be thinking of doing something wrong, right? So, this is now going into that. Speculative and hunch-based policing is real.
Jennifer Strong: What we do know, thanks to academic and government research, is facial recognition works best on white men.
Joy Buolamwini: Hi camera. I've got a face. Can you see my face? No glasses face? You can see...
Jennifer Strong: That's MIT researcher, Joy Buolamwini, giving a TED talk.
Joy Buolamwini: So what's going on? Why isn't my face being detected? Well, we have to look at how we give machines sight. So how this works is you create a training set with examples of faces. This is a face, this is a face, this is not a face. And over time you can teach a computer how to recognize other faces.
Jennifer Strong: However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.
Jennifer Strong: In 2018, she led a groundbreaking study showing that commercial face recognition systems repeatedly failed to classify dark-skinned women.
A year later, a major report on face ID from a federal agency called NIST, or the National Institute of Standards and Technology, found some face ID algorithms were up to a hundred times more likely to falsely match photos of people of color. But even if these systems can be engineered to reach perfect accuracy, they can be used in dangerous ways. And these problems go deeper than just skewed data and imperfect math.
Hamid Khan: Technology is not operating by itself. From the design, to the production, to the deployment, there is constantly bias built in. And it's not just the biases of the people themselves. That is only one part of it. It's the inherent bias within the system.
Jennifer Strong: Next episode, would it be surprising that photos of you, including some you've maybe never seen, are used by companies to build facial recognition systems?
Hoan Ton-That: On Twitter, do you remember this photo at all?
Jennifer Strong: No. I didn't know that was taken and I, I looked very...
Hoan Ton-That: You do. You looked very serious in that one.
Jennifer Strong: In part two, we meet the founder of one of the most controversial companies working in this space, Clearview AI's chief executive, Hoan Ton-That.
Hoan Ton-That: When we were building our facial recognition technology, we explored many different ideas in many different sectors, from private security to hospitality. When we gave it to some people in law enforcement, the uptake was huge. And they called us back the next day and said, we're solving cases. This is crazy. In a week's time, we had a really thick booklet.
Jennifer Strong: This episode was reported and produced by me, Tate Ryan-Mosley, and Emma Cillekens. We had help from Karen Hao and Benji Rosen. We're edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski. Special thanks to Kyle Thomas Hemingway, Eric Mongeon, and to the ACLU for sharing their recordings of Robert Williams.
Jennifer Strong: Thanks for listening. I'm Jennifer Strong.