Article Series
Browse our collection of multi-part article series on various topics
AI Case Studies from the Global South (1 part)
AI Fairness 101 - Real-World Incidents (12 parts)
1. When an Algorithm Broke Thousands of Families: The Netherlands Child Welfare Scandal
How a design-phase failure in the Dutch childcare fraud algorithm created one of the worst AI governance disasters in Europe — and what the Global South must learn from it.
2. Access Denied: How India's Digital 'Cure-All' Became a Real-World Fairness Crisis
How Aadhaar’s promise of digital inclusion turned into one of the largest algorithmic exclusion crises in the world.
3. The Golden Touch of Ruin: How Michigan’s MiDAS Algorithm Falsely Accused 40,000 People of Fraud
A deep dive into Michigan’s MiDAS unemployment fraud algorithm — and how design-phase failures, automation bias, and the removal of human oversight turned efficiency into injustice.
4. The COMPAS Algorithm Scandal: When AI Decides Who Goes to Jail ⚖️
As AI enters courts and welfare systems worldwide, the COMPAS debate reveals a critical lesson: fairness depends on context, and exporting models without reform risks scaling inequality.
5. The Optum Healthcare Algorithm Bias Against Black Patients (2019)
A 2019 case study of the Optum healthcare algorithm showing how proxy bias led to racial disparities and under-served Black patients.
6. When Algorithms Decide Who Recovers: The UnitedHealth nH Predict Case
In 2023, a lawsuit revealed how UnitedHealth used an AI system to determine when elderly patients should stop receiving care. The nH Predict case highlights how cost-driven algorithms can override clinical judgment and introduce systemic bias into healthcare decisions, raising critical questions for policymakers, especially in the Global South, about the risks of scaling AI without adequate oversight.
7. Algorithmic Gender Bias: Lessons from the Amazon AI Hiring Failure
Amazon built an AI to find the best candidates. It ended up filtering out women. Amazon’s hiring tool is a clear example of how gender bias can be embedded and amplified through algorithms. In the Global South, the risks are even higher.
8. AI Hiring Gone Wrong: How Eightfold’s Social Media Profiling Sparked a Fairness and Consent Crisis
A 2026 lawsuit against Eightfold AI reveals how job applicants may have been secretly scored using social media and online data, without consent or transparency. The case exposes how AI hiring systems can replicate bias, exclude candidates with thin digital footprints, and create massive legal and fairness risks. What happens when invisible algorithms decide who gets a chance?
9. AI Hiring Bias Exposed: How SiriusXM’s Algorithm Rejected Qualified Candidates
This article examines the landmark Harper v. Sirius XM Radio, LLC lawsuit, highlighting how automated hiring systems can institutionalize racial discrimination through proxy biases such as zip codes and educational institutions. By analyzing the technical and systemic failures of the iCIMS implementation, it offers a critical roadmap for corporate AI governance: preventing qualified talent from becoming algorithmically invisible in an era of increasing regulatory scrutiny.
10. How AI Bias Locked Out Millions of Job Seekers (A Case Study on Mobley v. Workday)
The Mobley v. Workday lawsuit marks a landmark shift in legal accountability, establishing that AI software vendors can be held liable as agents for discriminatory hiring practices that exclude qualified candidates. The case shows how black-box algorithms can systematically penalize individuals based on race, age, and disability through biased training data and facially neutral proxies. This legal evolution signals a broader mandate for Accountability by Design, requiring employers and developers to ensure transparency and human oversight in automated recruiting systems.
11. Kenya's Digital ID Crossroads: How Huduma Namba and Maisha Namba Risk Exclusion by Design
How Kenya's shift from Huduma Namba to Maisha Namba exposes the fairness, legal, and infrastructure failures that can turn digital identity into exclusion by design.
12. The Ghost in the Machine: Uganda's Ndaga Muntu and the High Cost of Digital Identity
How Uganda's Ndaga Muntu national ID system exposes the human cost of digital identity when legal status, public services, finance, education, and land rights depend on fragile registry infrastructure.
AI in Workforce (4 parts)
1. What Anthropic’s AI Jobs Study Misses in the Global South
Anthropic’s recent study on AI and jobs offers valuable insights into how artificial intelligence may affect labor markets in advanced economies. But the same framework does not fully capture how workforce disruption unfolds in the Global South—where informality, youth employment pressures, and service outsourcing shape labor market realities.
2. The Global South AI Labor Index: A Framework for Monitoring AI’s Workforce Impact
Artificial intelligence is beginning to reshape labor markets worldwide, yet most current studies measure its impact using indicators designed for advanced economies. In the Global South, workforce disruption is more likely to appear through rising informality, wage compression, underemployment, and shrinking entry-level opportunities rather than immediate job losses. This policy brief introduces the Global South AI Labor Index and an accompanying AI Labor Risk Dashboard to help governments detect early signals of AI-driven workforce transformation. Together, these tools provide a practical monitoring framework for managing the labor impacts of AI in developing economies.
3. AI and the Global South Workforce: The Next 10 Years
A Strategic Outlook for Labor, Development, and Artificial Intelligence
4. The Silent AI Shock in Workforce: An India Case Study
Why unemployment statistics will miss AI's real workforce disruption in the Global South.
AI Sustainability 101 (1 part)
AI-Policies (3 parts)
1. Beyond America's AI Action Plan: A Global South Response on Fairness
By stripping Diversity, Equity, and Inclusion (DEI) from its definition of fairness, America's AI Action Plan risks hard-coding inequities for the Global South, making a proactive response necessary: the Global South must define its own culturally and contextually relevant AI fairness standards.
2. When Good Intentions Go Global: Why the EU AI Act Doesn’t Fit the Global South
Why the EU AI Act—designed for data-rich, institutionally mature European economies—breaks down when applied to the Global South.
3. Mind the Gap: Why the NIST AI Risk Framework Breaks Down in the Global South
The NIST AI Risk Management Framework (AI RMF) is increasingly treated as a global blueprint for “trustworthy AI.” But what happens when a framework designed for resource-rich Western institutions is applied to the Global South?