Beyond America's AI Action Plan: A Global South Response on Fairness

By redefining fairness and removing Diversity, Equity, and Inclusion (DEI), America's AI Action Plan risks hard-coding inequities for the Global South, and the region must respond proactively by defining its own culturally and contextually relevant AI fairness standards.




Abstract

The recently released America’s AI Action Plan represents a major shift in global technology governance, redefining AI fairness as “ideological neutrality” and calling for the removal of Diversity, Equity, and Inclusion (DEI) principles from the NIST AI Risk Management Framework. This poses a direct threat to the Global South through “standards capture”, in which a narrow, culturally specific definition of fairness risks being exported as a global default. This approach overlooks the region’s distinct “risk surface”, shaped by structural realities such as caste, extreme language fragmentation, and informal economies.

In response, the Global South must collectively publish a “Global South AI Fairness Doctrine” grounded in culturally plural, legally robust, and technically implementable principles.

1. What Changed in America’s AI Action Plan

The global conversation about how to govern Artificial Intelligence (AI) has reached a turning point. A recent U.S. policy directive has redefined a core principle of AI governance: fairness. The recently released America’s AI Action Plan, together with an Executive Order on “Preventing Woke AI in the Federal Government”, shifts the focus from ensuring equitable outcomes across demographic groups to ensuring ideological balance.

The Three Core Directives

This transition is anchored in three directives that dismantle empirical fairness frameworks:

  • Removal of DEI from the NIST Framework
    The National Institute of Standards and Technology (NIST) is directed to strip diversity, equity, and inclusion (DEI) from its AI Risk Management Framework (RMF), contradicting its own long-standing guidance that defines bias as systemic, statistical, and human-cognitive.

  • New “Truth-Seeking” Procurement Standards
    Federal agencies must prioritize AI systems deemed objective and free from ideological bias. These subjective tests replace earlier metrics focused on verifiable, equitable outcomes.

  • Redefinition of Fairness
    Fairness shifts from demographic equity (equitable outcomes across groups) to viewpoint parity (balanced ideological representation).

This represents a dangerous transition from empirical, math-based fairness to subjective, political fairness. By prioritizing viewpoint parity over measurable outcomes, regulators create a strategic blind spot. Fairness becomes a debate between opinions rather than a technical requirement to mitigate real-world harm.

Proponents claim this shift ensures neutrality. Empirical evidence shows the opposite. When stripped of demographic safeguards, so-called “objective” systems routinely default to reinforcing historical inequality.
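The difference between empirical and subjective fairness can be made concrete. A minimal sketch of a demographic-parity check, the kind of measurable, outcome-based test the earlier framing supports (all data here is hypothetical, for illustration only):

```python
# Minimal sketch of an empirical fairness check: demographic parity.
# All decisions below are hypothetical, for illustration only.

def approval_rate(decisions):
    """Fraction of positive (approve) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups.
    A gap near 0 means outcomes are distributed evenly."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

Whatever one thinks of demographic parity as a target, it is auditable: the gap is a number that can be measured, disputed, and regulated. A test for “ideological neutrality” offers no comparable quantity.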


2. The Mirage of Neutrality: Why “Objective” Systems Fail

In AI governance, neutrality is often a mirage. Algorithms trained on historical data do not escape bias by ignoring context. They automate it.

To ignore social structure is not neutrality. It is granting historical inequality a permanent license.

Case Studies: Objective Inputs, Discriminatory Outcomes

| Context | Outcome |
| --- | --- |
| U.S. Healthcare (Optum Algorithm) | Past healthcare spending was used as a proxy for need, ranking equally sick Black patients as lower risk because less had historically been spent on their care. |
| Criminal Justice (COMPAS Scores) | “Race-blind” inputs such as ZIP codes and arrest history acted as statistical proxies for race, producing significantly higher false-positive rates for Black defendants. |

The Executive Order’s justification focuses largely on representational harms: image models that change the race or sex of historical figures or refuse to “celebrate white achievements”, or chatbots that prioritize pronoun rules over hypothetical catastrophes.

These are valid concerns, but they are secondary to the far more serious problem of allocative harms, which occur when AI systems use biased historical data to deepen economic and social inequalities, as in credit scoring, hiring filters, or insurance pricing.

This creates legal friction. A model can be ideologically neutral and still unlawful under statutes such as the Equal Credit Opportunity Act, fair lending laws, and Title VII of the Civil Rights Act, which prohibit disparate impact regardless of intent.
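The disparate-impact standard is itself an empirical test. A sketch of the four-fifths (80%) rule of thumb used in U.S. anti-discrimination enforcement, which flags a selection rate below 80% of the most-favored group’s rate (the numbers are hypothetical):

```python
# Sketch of the four-fifths (80%) rule, a screening heuristic for
# disparate impact under U.S. anti-discrimination enforcement.
# Selection counts are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants from a group who were selected."""
    return selected / total

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    Below 0.8 is conventionally treated as evidence of adverse impact."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(selected=60, total=100),  # 0.60
    "group_b": selection_rate(selected=30, total=100),  # 0.30
}

ratio = disparate_impact_ratio(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
print("Flags adverse impact" if ratio < 0.8 else "Passes 80% screen")
```

No intent enters the calculation: the law asks about measured outcomes, which is exactly the dimension a neutrality-only compliance baseline drops.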

If such neutrality-focused frameworks fail in data-rich Western contexts, their deployment in the data-scarce, multi-dimensional environments of the Global South carries exponentially greater risk.


3. The Global South’s Multi-Dimensional Risk Surface

The risk surface in the Global South is far more complex and multidimensional – defined by realities like caste, tribe, religion, gender, extreme language fragmentation, and income precarity. Informal economies, low-resource languages, and historical inequities amplify the potential for bias. AI models trained on skewed histories or in informal economies risk reproducing structural exclusions in credit, employment, and public services.

Fairness cannot be reduced to ideological balance in societies shaped by caste, informality, and linguistic diversity. A procurement rule rewarding “ideological neutrality” but dropping demographic equity checks will simply certify that a system doesn’t appear to favour one political narrative over another, ignoring whether false loan denials cluster around Dalit borrowers in India, whether speech-to-text models fail to understand speakers of low-resource languages like Bundeli or Chhattisgarhi, or whether admissions algorithms disadvantage first-generation students from specific rural districts.

Western neutrality frameworks are structurally unequipped for this complexity. This approach fundamentally conflicts with emerging local frameworks in the Global South, such as India’s RBI FREE-AI roadmap, which specifically calls for fairness-by-design to address these exact problems.

This produces a silent downgrade of global safety. Because NIST frameworks feed into ISO drafts and procurement rules, removing DEI allows vendors to claim compliance while ignoring local harm.

A system certified as “neutral” in Washington may still systematically deny credit to Dalit borrowers or exclude rural populations simply because those checks no longer exist in the compliance baseline.

As the U.S. retreats from these realities, the Global South is emerging as the unexpected custodian of responsible AI norms.


4. The Rise of Sovereign Frameworks: A Developing Consensus

Global South nations are no longer passive standards-takers. They are actively aligning AI with sustainable development, constitutional law, and historical justice.

India: RBI FREE-AI Sutras

India’s 2025 RBI FREE-AI framework mandates fairness-by-design and is grounded in constitutional protections against discrimination by caste, religion, and sex. The U.S. neutrality model is legally incompatible with Indian law.

South Africa: Redressing Historical Inequity

South Africa’s National AI Policy Framework embeds bias mitigation to prevent the replication of apartheid-era exclusions.

ASEAN: “Gotong Royong”

ASEAN’s cooperative model focuses on harmonizing digital ecosystems and reducing linguistic and infrastructure divides.

Latin America: Human-Centered AI

Brazil and Chile require mandatory fairness impact assessments, centering human rights rather than ideological balance.

By aligning with OECD AI Principles and EU AI Act protections, these frameworks defend the existing international consensus. While the U.S. narrows fairness, the Global South expands it to remain legally and operationally viable.

To resist standards capture, these national efforts must now federate into a collective strategic bloc.


5. Blueprint for a Global South AI Fairness Doctrine

Fragmented policies cannot counter Silicon Valley’s gravity. The Global South needs a federated doctrine that converts shared values into market power.

Five Pillars

  1. Adopt Shared Fairness Taxonomies
    Countries should co-create a Global South Fairness Lexicon that captures local attributes—such as caste, tribe, dialect, and rurality—so that they can be used consistently in regulations and audits across jurisdictions.

  2. Pooled Evaluation Infrastructure
    Launch a shared, open-source “Global South Benchmark Pack” featuring datasets for more than 100 low-resource languages, sector-specific tests, and subgroup performance dashboards to enable joint evaluation of global AI models.

  3. Standards Diplomacy Bloc
    Create a coordinated diplomatic bloc that votes together at ISO/IEC, ITU, and the G20. This will stop “neutrality-only” baselines from becoming the global default.

  4. Compute & Capacity Cooperation
    Build a BRICS+AU compute cooperative with reserved capacity for fairness research. This will reduce reliance on foreign infrastructure and give countries more freedom to innovate.

  5. Shared Procurement Clauses
    Make sure that all public-sector AI bids require vendors to supply subgroup error-rate reports and clear “policy cards” that show how fairness-related design decisions were made.

These pillars provide enforcement teeth. Vendors must either negotiate on terms aligned with local law and demographic safety or forgo access to the world’s fastest-growing digital markets.
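Pillar 5’s subgroup error-rate reports are straightforward to specify. A minimal sketch of the kind of per-group report a procurement clause could require vendors to submit (subgroups, labels, and predictions are hypothetical):

```python
# Sketch of a subgroup error-rate report of the kind Pillar 5 envisions.
# Subgroups, labels, and predictions are hypothetical.

def error_rates(y_true, y_pred):
    """False-positive and false-negative rates for one subgroup."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return {"fpr": fp / neg if neg else 0.0,
            "fnr": fn / pos if pos else 0.0}

def subgroup_report(data):
    """data maps subgroup name -> (true labels, model predictions)."""
    return {group: error_rates(y_true, y_pred)
            for group, (y_true, y_pred) in data.items()}

# Hypothetical credit decisions (1 = creditworthy / approved).
data = {
    "urban": ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]),
    "rural": ([1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0]),
}

for group, rates in subgroup_report(data).items():
    print(f"{group}: FPR={rates['fpr']:.2f}  FNR={rates['fnr']:.2f}")
```

A bid requirement of this shape makes clustered harms, such as false loan denials concentrating in one subgroup, visible in the compliance record rather than invisible behind an aggregate accuracy number.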


6. Conclusion: From Standards-Takers to Standards-Makers

The Global South faces a stark choice: replicate a flawed neutrality doctrine or chart a sovereign path.

Neutrality in the face of structural exclusion is not neutral. It is a policy choice with generational consequences.

Frameworks emerging across India, Africa, ASEAN, and Latin America show that the Global South is now the true steward of responsible AI. The opportunity is historic.

The Global South must act collectively, urgently, and decisively—to become the world’s new standards-makers.
