Article Series

Browse our collection of multi-part article series on various topics.

AI Fairness 101 - Real-World Incidents (12 parts)

  1. When an Algorithm Broke Thousands of Families: The Netherlands Child Welfare Scandal

    How a design-phase failure in the Dutch childcare fraud algorithm created one of the worst AI governance disasters in Europe — and what the Global South must learn from it.

  2. Access Denied: How India's Digital 'Cure-All' Became a Real-World Fairness Crisis

    How Aadhaar’s promise of digital inclusion turned into one of the largest algorithmic exclusion crises in the world.

  3. The Golden Touch of Ruin: How Michigan’s MiDAS Algorithm Falsely Accused 40,000 People of Fraud

    A deep dive into Michigan’s MiDAS unemployment fraud algorithm — and how design-phase failures, automation bias, and the removal of human oversight turned efficiency into injustice.

  4. The COMPAS Algorithm Scandal: When AI Decides Who Goes to Jail ⚖️

    As AI enters courts and welfare systems worldwide, the COMPAS debate reveals a critical lesson: fairness depends on context, and exporting models without reform risks scaling inequality.

  5. The Optum Healthcare Algorithm Bias Against Black Patients (2019)

    A 2019 case study of the Optum healthcare algorithm showing how proxy bias led to racial disparities and under-served Black patients.

  6. When Algorithms Decide Who Recovers: The UnitedHealth nH Predict Case

In 2023, a lawsuit revealed how UnitedHealth used an AI system to determine when elderly patients should stop receiving care. The nH Predict case highlights how cost-driven algorithms can override clinical judgment and introduce systemic bias into healthcare decisions. It raises critical questions for policymakers, especially in the Global South, about the risks of scaling AI without adequate oversight.

  7. The Algorithmic Gender Bias — Lessons from the Amazon AI Hiring Failure

Amazon built an AI to find the best candidates; it ended up filtering out women. The company’s hiring tool is a clear example of how gender bias can be embedded and amplified through algorithms. In the Global South, the risks are even higher.

  8. AI Hiring Gone Wrong: How Eightfold’s Social Media Profiling Sparked a Fairness and Consent Crisis

    A 2026 lawsuit against Eightfold AI reveals how job applicants may have been secretly scored using social media and online data, without consent or transparency. The case exposes how AI hiring systems can replicate bias, exclude candidates with thin digital footprints, and create massive legal and fairness risks. What happens when invisible algorithms decide who gets a chance?

  9. AI Hiring Bias Exposed: How SiriusXM’s Algorithm Rejected Qualified Candidates

This article examines the landmark Harper v. Sirius XM Radio, LLC lawsuit, highlighting how automated hiring systems can institutionalize racial discrimination through proxy biases such as zip codes and educational institutions. By analyzing the technical and systemic failures of the iCIMS implementation, it offers a critical roadmap for corporate AI governance to prevent qualified talent from becoming algorithmically invisible in an era of increasing regulatory scrutiny.

  10. How AI Bias Locked Out Millions of Job Seekers (A Case Study on Mobley v. Workday)

The Mobley v. Workday lawsuit represents a landmark shift in legal accountability, establishing that AI software vendors can be held liable as agents for discriminatory hiring practices that exclude qualified candidates. The case highlights how black-box algorithms can systematically penalize individuals based on race, age, and disability through biased training data and the use of seemingly neutral proxies. This legal evolution signals a broader mandate for accountability by design, requiring employers and developers to ensure transparency and human oversight in automated recruiting systems.

  11. Kenya's Digital ID Crossroads: How Huduma Namba and Maisha Namba Risk Exclusion by Design

    How Kenya's shift from Huduma Namba to Maisha Namba exposes the fairness, legal, and infrastructure failures that can turn digital identity into exclusion by design.

  12. The Ghost in the Machine: Uganda's Ndaga Muntu and the High Cost of Digital Identity

    How Uganda's Ndaga Muntu national ID system exposes the human cost of digital identity when legal status, public services, finance, education, and land rights depend on fragile registry infrastructure.