Ethics in AI: Facial Recognition Bans
In June 2020, IBM, Amazon, and Microsoft announced pauses on selling facial recognition to police. This wasn't a technical decision; it was an ethical one.
What Happened
IBM: Exited the facial recognition market entirely.
Amazon: One-year moratorium on police use of Rekognition.
Microsoft: Won’t sell to police without federal regulation.
The timing wasn’t coincidental. George Floyd’s murder and subsequent protests forced tech companies to confront how their tools are used.
Why Facial Recognition is Problematic
Accuracy Disparities
Studies consistently show higher error rates for:
- Darker skin tones
- Women
- Younger faces
The 2019 NIST Face Recognition Vendor Test found that some algorithms had false positive rates 10 to 100 times higher for Black faces than for white faces.
Mass Surveillance Effect
One CCTV camera → inconvenient
10,000 cameras with facial recognition → mass surveillance
The technology changes the power dynamic fundamentally.
Due Process Concerns
Facial recognition is used to identify, then arrest. But:
- How do you challenge an algorithm in court?
- What’s the false positive rate? (Often not disclosed)
- Who reviewed the match before arrest?
Mission Creep
Technology deployed for “serious crimes” expands:
- Terrorism → theft → jaywalking
- “Find criminal” → “Track everyone”
Real-World Harms
Wrongful Arrests
Robert Williams, Detroit: Arrested based on a faulty facial recognition match. Spent about 30 hours in custody. The algorithm was wrong.
Protest Surveillance
Hong Kong protesters wore masks. Portland protesters were identified. The chilling effect on free assembly is real.
Discrimination Amplification
If historical policing was biased, the training data inherits that bias, the model learns it, and deployment amplifies it.
The Technical Problem
Training Data Bias
```python
# If training data looks like this:
training_data = {
    "white_male": 60_000,
    "white_female": 20_000,
    "black_male": 5_000,
    "black_female": 2_000,
}
# the model will perform best on white_male faces.
```
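That kind of skew can be caught mechanically before any training run. A minimal sketch, with the group names taken from the example above and a 10% floor that is an illustrative assumption rather than any established standard:

```python
# Flag demographic groups that are underrepresented in the training set.
# The counts mirror the skewed example above; the 10% floor is an
# illustrative threshold, not an established standard.
training_data = {
    "white_male": 60_000,
    "white_female": 20_000,
    "black_male": 5_000,
    "black_female": 2_000,
}

total = sum(training_data.values())
min_share = 0.10  # assumed minimum share per group

underrepresented = {
    group: count / total
    for group, count in training_data.items()
    if count / total < min_share
}

for group, share in sorted(underrepresented.items()):
    print(f"{group}: {share:.1%} of training data (below {min_share:.0%} floor)")
```

On the counts above, both Black groups fall below the floor; the check is trivial, which is exactly the point. There is no excuse for not running it.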
Benchmark Gaming
"99.9% accuracy on the LFW benchmark!"

But LFW (Labeled Faces in the Wild) is:
- Mostly celebrities
- Mostly white
- Shot under relatively controlled conditions
Real-world performance is worse.
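Part of why a headline number misleads: an aggregate accuracy computed over a skewed test set can look excellent while one group's error rate is far worse. A toy illustration, with all counts invented:

```python
# Toy illustration: aggregate accuracy hides per-group disparity
# when the test set itself is skewed. All counts are invented.
results = {
    # group: (correct, total)
    "group_a": (990, 1000),  # 99.0% accurate, dominates the test set
    "group_b": (40, 50),     # 80.0% accurate, barely represented
}

correct = sum(c for c, _ in results.values())
total = sum(t for _, t in results.values())
aggregate = correct / total
per_group = {g: c / t for g, (c, t) in results.items()}

print(f"aggregate: {aggregate:.1%}")  # looks great on a slide
for group, acc in sorted(per_group.items()):
    print(f"{group}: {acc:.1%}")      # the gap only shows up here
```

The aggregate comes out above 98% even though one group sits at 80%. Reporting only the pooled number is a choice, not an accident.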
What Developers Should Consider
Before Building
- Who will use this? Law enforcement? HR? Marketing?
- Who is affected? Consenting users? Random citizens?
- What’s the failure mode? Annoyance? Arrest?
- Is there meaningful consent? Can affected people opt out?
During Development
- Diverse training data: Balanced representation
- Fair evaluation: Test across demographics
- Failure transparency: Publish error rates by group
- Red teaming: Adversarial testing
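"Fair evaluation" above can be made concrete: compute error rates separately per demographic group instead of one pooled number. A minimal sketch, assuming each verification trial records the group, the model's match decision, and the ground truth (the field names and trial data are hypothetical):

```python
# Compute the false positive rate per group from verification trials.
# A false positive: the model claims a match where ground truth says no.
# Field names and trial data are hypothetical.
trials = [
    # (group, predicted_match, actual_match)
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_a", True,  False),  # false positive
    ("group_b", True,  False),  # false positive
    ("group_b", True,  False),  # false positive
    ("group_b", False, False),
]

def fpr_by_group(trials):
    stats = {}  # group -> (false_positives, non_match_trials)
    for group, predicted, actual in trials:
        if not actual:  # only non-match trials can produce false positives
            fp, n = stats.get(group, (0, 0))
            stats[group] = (fp + (1 if predicted else 0), n + 1)
    return {group: fp / n for group, (fp, n) in stats.items()}

rates = fpr_by_group(trials)
for group, rate in sorted(rates.items()):
    print(f"{group}: FPR {rate:.0%}")
```

Publishing a table like this per group, rather than a single rate, is what "failure transparency" means in practice.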
Before Deployment
- Use case review: What are customers actually doing?
- Customer vetting: Do sales contracts restrict use?
- Audit mechanism: How do you know it’s being used responsibly?
Regulatory Landscape
Bans
- San Francisco: Banned government use (2019)
- Portland: Banned public and private use (2020)
- EU: Proposed ban in public spaces (2021)
Proposed Federal Legislation
Multiple bills proposed but not passed:
- Facial Recognition and Biometric Technology Moratorium Act
- George Floyd Justice in Policing Act
Industry Self-Regulation
Partnership on AI, Algorithm Justice League, and internal ethics boards at major companies.
The Broader Question
Facial recognition is one technology. The pattern repeats:
- Predictive policing
- Hiring algorithms
- Credit scoring
- Content moderation
Questions to ask:
- Does this amplify existing inequities?
- Who benefits? Who’s harmed?
- Is there meaningful oversight?
- What’s the alternative?
What Changed After June 2020?
Modest progress:
- More companies have ethics review processes
- Some bans at city/state level
- Academic attention increased
Still lacking:
- Federal regulation
- Consistent standards
- Meaningful transparency
For AI Practitioners
You have choices:
- Which projects you work on
- How you build systems
- What you push back on
“I was just following the spec” isn’t acceptable when the spec causes harm.
Final Thoughts
The 2020 moratoriums were significant but insufficient. Companies paused sales to police—but facial recognition is everywhere else.
The technology isn’t going away. The question is: under what constraints? With what oversight? For whose benefit?
These aren’t just policy questions. They’re engineering questions. Build thoughtfully.
With great power comes great responsibility.