Ethics in AI: Facial Recognition Bans

ai ethics machine-learning

In June 2020, IBM, Amazon, and Microsoft announced pauses on selling facial recognition to police. This wasn’t a technical decision—it was ethical.

What Happened

IBM: Exited the facial recognition market entirely.

Amazon: One-year moratorium on police use of Rekognition.

Microsoft: Won’t sell to police without federal regulation.

The timing wasn’t coincidental. George Floyd’s murder and subsequent protests forced tech companies to confront how their tools are used.

Why Facial Recognition is Problematic

Accuracy Disparities

Studies consistently show higher error rates for women, people with darker skin, and the very young and very old.

The NIST 2019 study found some algorithms had 10-100x higher false positive rates for Black faces compared to white faces.
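To make the 10-100x figure concrete, here is a minimal sketch (with hypothetical confusion counts, not NIST's actual numbers) of how a false positive rate disparity is computed:

```python
# Hypothetical confusion counts for a face-matching system.
# A "false positive" is a non-matching face incorrectly flagged as a match.
counts = {
    "group_a": {"false_positives": 3, "true_negatives": 99_997},
    "group_b": {"false_positives": 150, "true_negatives": 99_850},
}

def false_positive_rate(c):
    """FPR = FP / (FP + TN): the share of innocent faces flagged as matches."""
    return c["false_positives"] / (c["false_positives"] + c["true_negatives"])

fpr_a = false_positive_rate(counts["group_a"])
fpr_b = false_positive_rate(counts["group_b"])
print(f"group_a FPR: {fpr_a:.5f}")               # 0.00003
print(f"group_b FPR: {fpr_b:.5f}")               # 0.00150
print(f"disparity ratio: {fpr_b / fpr_a:.0f}x")  # 50x
```

Both rates look tiny in isolation; the disparity only shows up when you compare groups, which is why aggregate accuracy numbers hide the problem.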

Mass Surveillance Effect

One CCTV camera → inconvenient
10,000 cameras with facial recognition → mass surveillance

The technology changes the power dynamic fundamentally.

Due Process Concerns

Facial recognition is used to identify suspects, who are then arrested. But defendants are often not told that a face match drove the investigation, matches are treated as evidence rather than as investigative leads, and there is little opportunity to challenge the algorithm in court.

Mission Creep

Technology deployed for “serious crimes” expands: systems justified for terrorism and violent crime end up used for petty theft, protest monitoring, and routine surveillance.

Real-World Harms

Wrongful Arrests

Robert Williams, Detroit: Arrested based on faulty facial recognition match. Spent 30 hours in custody. The algorithm was wrong.

Protest Surveillance

Hong Kong protesters wore masks. Portland protesters were identified. The chilling effect on free assembly is real.

Discrimination Amplification

If historical policing was biased, the training data is biased; if the training data is biased, the model is biased; and deployment amplifies that bias at scale.

The Technical Problem

Training Data Bias

# If training data looks like:
training_data = {
    "white_male": 60_000,
    "white_female": 20_000,
    "black_male": 5_000,
    "black_female": 2_000,
}

# Model will perform best on white_male faces and worst on black_female faces
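One common (partial) mitigation is to reweight training examples so each group contributes equally in expectation. A minimal sketch using inverse-frequency weights over the counts above; note that reweighting cannot manufacture diversity the dataset lacks:

```python
training_data = {
    "white_male": 60_000,
    "white_female": 20_000,
    "black_male": 5_000,
    "black_female": 2_000,
}

total = sum(training_data.values())  # 87,000 examples
n_groups = len(training_data)

# Inverse-frequency weight: each group's total weighted contribution
# equals total / n_groups, so no group dominates the loss.
weights = {g: total / (n_groups * n) for g, n in training_data.items()}

for group, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{group}: weight {w:.2f}")
```

The underrepresented groups get proportionally larger weights (black_female examples count roughly 30x more than white_male examples here), but collecting genuinely balanced data remains the better fix.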

Benchmark Gaming

"99.9% accuracy on LFW benchmark!"
# But LFW is:
# - Mostly celebrities
# - Mostly white
# - Controlled conditions

Real-world performance is worse.
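The arithmetic behind that gap: a high aggregate score can coexist with poor subgroup performance whenever the benchmark is skewed. A hypothetical illustration (shares and accuracies are made up):

```python
# Hypothetical benchmark composition and per-subgroup accuracy.
subgroups = {
    # name: (share of benchmark, accuracy on that subgroup)
    "majority": (0.90, 0.999),
    "minority": (0.10, 0.80),
}

# Aggregate accuracy is the share-weighted average.
aggregate = sum(share * acc for share, acc in subgroups.values())
print(f"aggregate accuracy: {aggregate:.3f}")  # 0.979
```

A headline figure near 98% hides a one-in-five failure rate on the minority subgroup, which is exactly the failure mode a celebrity-heavy benchmark like LFW invites.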

What Developers Should Consider

Before Building

  1. Who will use this? Law enforcement? HR? Marketing?
  2. Who is affected? Consenting users? Random citizens?
  3. What’s the failure mode? Annoyance? Arrest?
  4. Is there meaningful consent? Can affected people opt out?

During Development

  1. Diverse training data: Balanced representation
  2. Fair evaluation: Test across demographics
  3. Failure transparency: Publish error rates by group
  4. Red teaming: Adversarial testing
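Items 2 and 3 above can be operationalized as a disaggregated evaluation: compute and publish error rates per demographic group rather than a single aggregate. A minimal sketch (the record format and group labels are assumptions for illustration):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_match, actual_match) tuples.
    Returns per-group false positive and false negative rates."""
    tallies = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        t = tallies[group]
        if actual:
            t["pos"] += 1
            if not predicted:
                t["fn"] += 1  # missed a true match
        else:
            t["neg"] += 1
            if predicted:
                t["fp"] += 1  # flagged a non-match
    return {
        g: {
            "fpr": t["fp"] / t["neg"] if t["neg"] else 0.0,
            "fnr": t["fn"] / t["pos"] if t["pos"] else 0.0,
        }
        for g, t in tallies.items()
    }

# Toy usage with synthetic records:
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, True),
]
rates = error_rates_by_group(records)
```

Running this per release, and publishing the resulting table, is what "failure transparency" looks like in practice.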

Before Deployment

  1. Use case review: What are customers actually doing?
  2. Customer vetting: Do sales contracts restrict use?
  3. Audit mechanism: How do you know it’s being used responsibly?

Regulatory Landscape

Bans

San Francisco became the first major U.S. city to ban government use of facial recognition in 2019; Oakland, Boston, and Portland followed with bans of their own.

Proposed Federal Legislation

Multiple bills have been proposed but not passed, including the Facial Recognition and Biometric Technology Moratorium Act.

Industry Self-Regulation

Partnership on AI, Algorithm Justice League, and internal ethics boards at major companies.

The Broader Question

Facial recognition is one technology. The pattern repeats with predictive policing, emotion recognition, and other biometric systems.

Questions to ask:

  1. Does this amplify existing inequities?
  2. Who benefits? Who’s harmed?
  3. Is there meaningful oversight?
  4. What’s the alternative?

What Changed After June 2020?

Modest progress: city and state bans spread, and Amazon extended its police moratorium indefinitely in 2021.

Still lacking: comprehensive federal regulation, independent audits of deployed systems, and meaningful recourse for people who are misidentified.

For AI Practitioners

You have choices: where you work, what you agree to build, and when you raise concerns.

“I was just following the spec” isn’t acceptable when the spec causes harm.

Final Thoughts

The 2020 moratoriums were significant but insufficient. Companies paused sales to police—but facial recognition is everywhere else.

The technology isn’t going away. The question is: under what constraints? With what oversight? For whose benefit?

These aren’t just policy questions. They’re engineering questions. Build thoughtfully.


With great power comes great responsibility.
