Shaping an Equitable AI Future Through Collaboration and Trust

Why Responsible AI Must Move Beyond Dominance and Start Sharing Value

The AI economy is often discussed through the language of scale, speed, and supremacy. But beneath the race for models, chips, and market share lies a harder question: who gets to shape this future, and who merely inherits it? For countries like India, that question is no longer abstract. It is tied to language, culture, trust, data sovereignty, and the right to build AI systems that reflect local realities rather than imported assumptions.

In a conversation with Dataquest, Reggie Townsend, Vice President of AI Ethics, Governance and Social Impact at SAS, lays out why responsible AI must go beyond compliance rhetoric and become a more practical framework for fairness, accountability, and shared value.

Recode the AI Economy

Most conversations about the AI economy assume that dominance equals advantage. To “recode the AI economy” means recognizing that AI dominance without partnership is a depreciating asset. Countries that inherit a foreign AI stack they had no hand in shaping have every incentive to seek alternatives, because their needs, languages, and values will otherwise be overlooked. The current AI business model leaves many nations on the receiving end of decisions made elsewhere, and that dependency is neither sustainable nor just. True AI progress needs to be shaped by global talent, shared research ecosystems, and cross-border collaboration.

The strength of the US AI sector itself reflects deep global interdependence. More than 60% of the top AI-related startups on the Forbes AI 2025 list were founded or co-founded by immigrants, and 70% of full-time graduate students in AI-related fields are not from the US. These figures suggest that durable leadership is emerging from multiple centers, shaped by local priorities but strengthened globally.

Moving from Extraction to Value Sharing

Real value sharing happens when local innovators and citizens benefit in tangible ways: lower bandwidth prices, access to data commons, and the ability to use AI tools grounded in their own languages and cultural norms. In India, this means AI systems that work in Hindi, Tamil, Bengali, and the hundreds of other languages that enrich the subcontinent’s linguistic landscape. It requires community-level access, not just enterprise-level access. Technology does not exist in a vacuum; it affects us all in unexpected ways.

India’s “Drone Didi” project exemplifies how people previously outside the tech economy can be brought into it: the women participating are adding value to their own lives while contributing to a larger cause. SAS pursues similar initiatives, such as Data for Good, which empowers South African micro-farmers by combining generations of inherited knowledge with market information, expanding their agency.

The Power Levers: Data, Compute, or Rulemaking?

There is no single power lever that decides who wins; all three coexist. Data had its moment, compute is currently winning through the proliferation of GPUs, and rulemaking faces an uphill battle against the pace of AI development. But this is a dance in which each must lead at different times. Now is rulemaking’s turn to lead, without compromising data and compute. The rules we devise should govern both opportunities and harms, addressing the new capabilities that data and compute make possible.

Cultural Blind Spots in Responsible Tech

The biggest blind spot in global “responsible tech” debates is often cultural. The largest AI models originate from the US and China, neither of which fully shares the cultural values of India. India is not a monolith; it has diverse cultural dimensions and social nuances that many big providers overlook. Tools like the Global Index on Responsible AI can benchmark progress on how well AI systems reflect local priorities and linguistic diversity.

Building Trust in Automation

For organizations using automation in banking, healthcare, or government services, five basics must be in place to ensure trust in AI outcomes:

  • Oversight: visibility, transparency, and accountability within the organization.
  • Controls: compliance with regulatory requirements and internal policies.
  • Operations: a clear understanding of where AI fits in the workflow, with human oversight at key points.
  • Culture: a shared expectation that AI augments rather than replaces human roles, particularly in sensitive fields like healthcare.
  • Redress pathways: mechanisms for grievances and compensation when outcomes go wrong.
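These pillars are governance practices rather than code, but as a loose illustration, an organization could track sign-off on each one before an AI system ships. The sketch below is purely hypothetical: the pillar names come from the list above, while the checklist mechanics, class, and system name are assumptions, not a SAS tool.

```python
from dataclasses import dataclass, field

# The five trust pillars named above; everything else here is illustrative.
PILLARS = ["oversight", "controls", "operations", "culture", "redress"]

@dataclass
class TrustReview:
    """Tracks pillar-by-pillar sign-off before an AI system is deployed."""
    system_name: str
    signed_off: dict = field(
        default_factory=lambda: {p: False for p in PILLARS}
    )

    def sign_off(self, pillar: str) -> None:
        if pillar not in self.signed_off:
            raise ValueError(f"Unknown pillar: {pillar}")
        self.signed_off[pillar] = True

    def ready_to_deploy(self) -> bool:
        # Deployment is blocked until every pillar has been reviewed.
        return all(self.signed_off.values())

# Hypothetical usage: a review that has only cleared oversight so far.
review = TrustReview("loan-approval-model")
review.sign_off("oversight")
print(review.ready_to_deploy())  # False: four pillars still unreviewed
```

The point of the structure is simply that trust is conjunctive: a system that clears four of the five basics is still not ready.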

Innovation vs. Harm

The balance between innovation and harm is complicated. While innovation aims to enable human values, it can unintentionally cause harm. Structures and rules must be established to level the playing field and create a commons for coexistence without extensive harm. One area of concern is the use of probabilistic AI methods in high-risk scenarios such as healthcare and finance, where human judgment must remain integral.

Common Mistakes in Responsible AI Implementation

Many leaders view AI primarily as a cost-reduction opportunity, a framing that breeds distrust of the technology and its creators. That short-term thinking can undermine long-term profitability. Cultural relevance in AI systems, frameworks like The Quad, and emerging regulations can help build trust and ensure that everyone benefits from AI.
