AI Governance Built on Real-World Evidence from the Global South

Evidence
We collect real-world AI deployment evidence across the Global South.
Datasets | Real-world incidents | Fairness gaps | Sustainability claims | Policy trade-offs
Insights
We convert evidence into cross-country governance insights.
Sector risk maps | Evidence dashboards | Comparative country briefs
Frameworks
We translate insights into deployable AI governance frameworks.
Model audit toolkits | Fairness passports | Risk scoring | Policy design templates

Global South AI Governance

Start Exploring

Subscribe to our newsletter

No spam. Occasional updates when new evidence, briefs, or tools are published.

This platform serves as a living evidence repository for AI governance across the Global South — combining real incident analysis, regulatory interpretation, field research, and practical implementation tools.

Upcoming releases will expand this layer with structured evidence packs, practitioner checklists, audit playbooks, and policy briefing material designed for real-world institutional deployment.

Get Started

Browse our articles by category, author, or tag

Latest Articles

View All
How AI Bias Locked Out Millions of Job Seekers (A Case Study on Mobley v. Workday)

The Mobley v. Workday lawsuit represents a landmark shift in legal accountability, establishing that AI software vendors can be held liable as agents of employers for discriminatory hiring practices that exclude qualified candidates. The case highlights how black-box algorithms can systematically penalize individuals based on race, age, and disability through biased training data and seemingly neutral proxies. This legal evolution signals a broader mandate for Accountability by Design, requiring employers and developers to ensure transparency and human oversight in automated recruiting systems.

Core Team
AI Hiring Bias Exposed: How SiriusXM’s Algorithm Rejected Qualified Candidates

This article examines the landmark Harper v. Sirius XM Radio, LLC lawsuit, highlighting how automated hiring systems can institutionalize racial discrimination through proxy variables such as zip codes and educational institutions. By analyzing the technical and systemic failures of the iCIMS implementation, it offers a critical roadmap for corporate AI governance to prevent qualified talent from becoming algorithmically invisible in an era of increasing regulatory scrutiny.

Core Team
AI Hiring Gone Wrong: How Eightfold’s Social Media Profiling Sparked a Fairness and Consent Crisis

A 2026 lawsuit against Eightfold AI alleges that job applicants may have been secretly scored using social media and online data without consent or transparency. The case exposes how AI hiring systems can replicate bias, exclude candidates with thin digital footprints, and create massive legal and fairness risks. What happens when invisible algorithms decide who gets a chance?

Core Team