Facial Recognition Bias is a Big Problem: Lawmakers Must Move

What’s the best way to go about limiting the scope of facial recognition products?

Amazon, as a company, has a bit of an addiction when it comes to touting its own consumer products.

That is especially true in the case of the company’s Rekognition offering.

Rekognition is Amazon’s revolutionary (or so it says) facial recognition system.

The company bills it as easy to use for the vast majority of users.

Amazon also doesn’t shy away from encouraging its customers to detect, analyze, and compare people’s faces to accomplish a wide variety of tasks, including:

  • People counting
  • User verification
  • Public safety use cases

However, the American Civil Liberties Union released a study this past Thursday showing that Amazon’s facial recognition technology confused images of 28 members of Congress with publicly available mug shots.

So why is that a problem?

The technology is new and Amazon is doing its best to improve it. Should we, as people, go a little easy on Amazon for failing to come up with the perfect image recognition system?

Well, the answer isn’t simple.

It is true that Amazon is just starting to develop image recognition systems with commercial applications.

But it is also true that the company has actively marketed its Rekognition offering to many United States law enforcement agencies.

Now, with that in mind, we feel that Amazon’s image recognition system is simply not accurate enough for the company to be offering it to law enforcement agencies in the country.

The American Civil Liberties Union study also illustrated the problem of racial bias that plagues almost all facial recognition systems today.

Jacob Snow, an American Civil Liberties Union attorney, recently said that nearly 40 percent of Amazon Rekognition’s false positives in their tests involved people of color.

That figure is all the more striking given that people of color make up only about 20 percent of the United States Congress.

He also noted that people of color in the United States already deal with disproportionately harmful policing practices.

Thus, for Snow, it should not be hard for anyone to see how image recognition systems such as Amazon’s Rekognition could exacerbate that problem.

This shouldn’t come as a surprise to anyone who has followed facial recognition technology news and issues for the past couple of years.

Modern facial recognition technologies consistently have difficulty with darker skin tones.

It is, in fact, a well-established problem with facial recognition systems.

Back in February of this year, Timnit Gebru of Microsoft and Joy Buolamwini of the MIT Media Lab published findings showing that facial recognition and analysis software from the likes of Face++, Microsoft, and IBM had a much harder time identifying the gender of the person in a given image if that person had a darker skin tone rather than a lighter one.

Then, back in June, Inioluwa Raji (from the Algorithmic Justice League) and Buolamwini evaluated Amazon’s Rekognition offering again.

Again, they found similar problems with built-in racial bias.

Since then media reports have revealed that Amazon’s Rekognition technology even failed to get Oprah right.

Seeing all these results, Buolamwini wrote a letter to Jeff Bezos, the CEO of Amazon.

She mentioned in the letter that, given what the community knew about the biased history and present of policing, her team found the performance metrics of Amazon’s facial recognition and analysis technology concerning, especially in real-world pilots.

Moreover, she wrote that she had decided to join the ever-growing chorus of dissent calling on technology companies such as Amazon to stop equipping United States law enforcement agencies with current facial recognition and analysis technology.

It may come as a surprise to some that the Amazon Rekognition system has already made its way to Washington County in Oregon.

And the county sheriff’s office is actively using it.

Recent media reports have also revealed that the Orlando, Florida police department has also given the green light to resume a pilot program in order to test the efficacy of Amazon Rekognition.

However, the city has said that for the present moment, the police department will not use any images of the public for any testing purposes.

The Amazon Rekognition system will only have access to images of Orlando police officers who volunteered to participate in the pilot, and the program will use only those images.


But the problem is, these are only the company’s clients that are publicly known.

When asked about its other clients, Amazon simply declined to comment on its full offerings or the overall scope of United States law enforcement use of the Rekognition system.

As far as privacy advocates are concerned, any tiny amount of racial bias in the algorithms is actually too much.

These issues are especially important now since the current Amazon Rekognition systems have demonstrated a clear racial bias.

The executive director of the Center on Privacy and Technology at Georgetown University, Alvaro Bedoya, recently said that to fully grasp the ramifications of such faulty image recognition systems, the community should imagine a speed camera beside a road that wrongly reported that drivers with darker skin tones were speeding at much higher rates than drivers with lighter skin tones.

After that, according to Bedoya, the community should imagine that the law enforcement agency knows about the racial bias, and that everyone else around the technology knows about it too, yet the agency keeps using it no matter what.

He added that the community would not find such a racial bias acceptable in any other setting, so why should it somehow be completely acceptable here?

On the other hand, Amazon has taken issue with the parameters of the aforementioned study.

The company has noted that the American Civil Liberties Union used a confidence threshold of 80 percent.

What does that mean?

The confidence threshold is the minimum confidence score, or estimated likelihood of a match, that Amazon’s Rekognition service requires before it reports a match for an image.

Users of the Amazon Rekognition offering have the option of adjusting that confidence threshold according to their own needs and desired degree of accuracy.

Amazon said in a recent official statement that while it considers an 80 percent confidence threshold acceptable for images containing chairs, hot dogs, animals, and other social media use cases, such a low threshold would not be appropriate for identifying individuals with a reasonable degree of certainty.

Amazon also said that whenever a customer wants to use facial recognition capabilities for law enforcement activities, the company recommends setting the confidence threshold to at least 95 percent, or even higher.
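To make those numbers concrete, here is a minimal Python sketch of how a confidence threshold prunes candidate matches. The names and match scores below are hypothetical illustrations, not real Rekognition output; only the 80 percent default and 95 percent recommended cutoffs come from the statements above.

```python
# Sketch: a confidence threshold keeps only matches at or above the cutoff.
# All candidate data here is made up for illustration purposes.

def filter_matches(candidates, threshold):
    """Return only the candidate matches whose confidence meets the threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

candidates = [
    {"name": "person_a", "confidence": 99.2},
    {"name": "person_b", "confidence": 87.5},
    {"name": "person_c", "confidence": 81.3},
]

# At the 80 percent default the ACLU used, all three candidates are reported.
default_hits = filter_matches(candidates, 80.0)

# At Amazon's recommended 95 percent floor for law enforcement, only one survives.
strict_hits = filter_matches(candidates, 95.0)

print(len(default_hits))  # 3
print(len(strict_hits))   # 1
```

The point of the sketch is that the threshold is purely a filter on reported matches: a lower setting surfaces more candidates, including weaker, more error-prone ones.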

The company has also emphasized that it is still working very closely with all its partners.

What is still unclear is the form that the company’s guidance takes.

It is also not clear whether the company’s customers and/or law enforcement agencies actually follow the company’s guidance on how to best use the company’s products.

Of course, the obligation ultimately falls on customers, including law enforcement agencies, to put in the work to implement the adjustments that Amazon recommends.

Recent media reports have revealed that a spokesperson for the Orlando Police Department could not say how the city had calibrated the Amazon Rekognition system for its pilot program.

The American Civil Liberties Union has countered Amazon’s position by pointing out that 80 percent is the Rekognition system’s default confidence threshold.

Moreover, Joshua Kroll, a computer scientist at UC Berkeley, verified the American Civil Liberties Union’s findings independently.

He noted that if anything, the face-forward and professionally photographed congressional portraits that researchers used in their study represented softball practice compared to what Amazon Rekognition would have to encounter and then handle out in the real world.

Kroll also said that, as far as he could tell, this was the easiest possible use case for Amazon Rekognition technology to work on.

Furthermore, he said that while his team had not tested Rekognition, he would naturally anticipate that it would perform even worse out in a field environment.

And the reason for that is simple.

Kroll believes that out in the field, the technology would not see people’s faces straight on.

Moreover, it may have to manage imperfect lighting.


Any image recognition technology would also have to deal with a fair amount of occlusion.

Some people may have something on their heads or faces, or may be carrying or wearing things that obscure them.

All of these factors get in the way of Rekognition’s attempts to analyze faces.

As expected, Amazon has downplayed the potentially serious implications of errors associated with facial recognition systems.

One company statement read that in real-world scenarios, facial recognition products such as Amazon Rekognition are usually employed only to help narrow the field and allow humans to expeditiously review the results and consider the options using their own judgment.

However, that statement from Amazon elides the very serious and real consequences that people whom the facial recognition technology wrongly identifies may have to suffer.

Bedoya recently noted that, at a bare minimum, these people would face investigation by law enforcement agencies.

And it would be hard for anyone to find a person who likes law enforcement agencies investigating him or her.

According to Bedoya, the idea some have that there is no real cost to facial recognition technology misidentifying someone simply defies logic.

So does the notion that a human backstop provides a reasonable check on the facial recognition system.

The director of the Domestic Surveillance Project at the Electronic Privacy Information Center, Jeramie Scott, recently said that with technology, the community often starts to rely heavily on what it can do.

People start to think that the technology is infallible.

Of course, that is not the case.

To take an example, back in 2009, law enforcement officers in San Francisco handcuffed a woman and held her at gunpoint.

Why did they do that?

Because their license-plate reader had misidentified the woman’s car.

Now, to avoid all of that nastiness and confrontation, the officers only had to look at the woman’s plate themselves.

They could have also looked at the car itself a bit more closely; they would have realized that its color, model, and make did not match.

But what did they do instead?

They trusted their plate reader, a machine.

Even if we assumed for a minute that we lived in a world where facial recognition and analysis technology worked perfectly, it would still raise concerns for companies to simply hand the technology to law enforcement agencies.


According to Scott, facial recognition and analysis technology would destroy the ability of ordinary citizens to remain anonymous.

Scott also added that such a technology would enhance the ability of various law enforcement agencies to misuse it and surveil even those individuals that they did not suspect of crimes.

Furthermore, according to Scott, facial recognition and analysis technology could chill activities and rights protected under the First Amendment.

He also pointed out that what they want to avoid here is nothing less than mass surveillance.

While the American Civil Liberties Union study covers well-trodden ground in pointing out facial recognition and analysis technology’s faults, it has a much better chance of making a real impact.


According to Bedoya, the study’s most powerful aspect is that it finds success in making it personal for United States Congress members.

Members of the United States Congressional Black Caucus had actually written a letter previously to the technology giant Amazon.

In the letter, the Black Caucus expressed its concerns about the facial recognition technology.

However, the American Civil Liberties Union appears to have found much more success in attracting a good amount of attention from several additional United States lawmakers.

There is no doubt about the fact that the real trick here for everyone involved would be to turn all that concern into concrete action.


At a minimum, according to privacy advocates, law enforcement agencies that have adopted facial recognition and analysis technology should face heavy restrictions until researchers and engineers have corrected the technology’s racial bias and assured its accuracy.

Some argue that even after that, US lawmakers need to make sure that they limit and clearly define the scope of facial recognition and analysis technology.

Unless and until that takes place on a wide scale, we feel the time has come not merely to pump the brakes.

Instead, we should slam down on those brakes with both feet.

Bedoya also pointed out that in 21st-century policing, society should find it unacceptable to deploy a technology whose performance researchers have shown to vary significantly across groups of people based on nothing but the color of their skin.

That in and of itself is a huge problem.

But technology companies are moving quickly.

Many have already jumped at other markets.

Recent reports have suggested that schools can now acquire facial recognition and analysis technology for free.

Of course, the real question is should they make use of facial recognition and analysis technology or not?

The vast majority of parents living in the United States worry about their kids’ safety at school.

Many have thought long and hard about how to keep their kids safe from the shootings that have happened in the last couple of years.

One of those parents is Rob Glaser.

More specifically, Rob has thought long and hard about what he, as an individual, can do apart from getting into endless and nasty arguments over what he calls the G word.

We don’t mean to say that Glaser doesn’t like gun control.

What we mean is that Glaser is a man of action.

To read more about what he plans to do to make sure kids are safe when they go to school, click here.




Zohair A. Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.