Who Shapes the AI-First World? Rules, Risks, and Responsibility
AI is no longer a niche subject; it has become central to product design and business strategy. Governments are moving quickly, and companies are paying close attention. The goal is to protect people while enabling growth. Striking the right balance is the real challenge.
Artificial Intelligence Act
Europe has taken the lead with its risk-based AI Act. This legislation prohibits harmful applications and enforces strict checks for high-risk systems. The law also mandates audits and continuous lifecycle reviews to safeguard rights and safety.
Global standards are evolving in response. The OECD has revised its AI principles, emphasizing trustworthy systems that respect human rights. These principles highlight transparency, accountability, and human oversight.
India is charting its own course with new digital laws and the Digital Personal Data Protection Act, strengthening data rules. Policymakers are still determining how AI fits into this framework, while regulators and businesses engage in active dialogue.
The Challenge of Compliance
A key concern is that complex compliance requirements tend to benefit larger firms. Sanjay Koppikar, CPO of EvoluteIQ, noted, “If you make the compliances complex, only the bigger companies will be able to follow it.” This could lead to an “AI oligarchy,” mirroring past patterns of concentrated power.
The issue stems from the demands of training and running advanced models, which require massive compute power, specialized infrastructure, and large budgets. Cloud providers and chip makers control much of this supply, driving up costs for smaller players. That limits competition, reduces experimentation, and heightens systemic risk.
Accountability in AI Systems
When AI systems fail, accountability becomes a significant challenge. Who is responsible? Data scientists, product leaders, compliance teams, or executives? Many organizations are still figuring this out. Koppikar describes his team’s approach using three layers: technical accountability through audit trails, operational accountability through human-in-the-loop, and governance accountability with clear escalation pathways.
For high-stakes outcomes, human oversight remains essential. The human-in-the-loop concept is more than a buzzword; it serves as a safeguard, allowing time to catch biases, mistakes, and risks while maintaining clear responsibility.
Guardrails, Governance, and the Future of AI Policy
Regulators increasingly expect human oversight in sensitive domains such as healthcare, finance, and public safety. However, challenges remain in standards and interoperability. The industry still lacks common protocols for agent communication, provenance tracking, or audit formats, slowing independent audits and cross-vendor checks.
Koppikar highlights the need for shared frameworks and collaboration among vendors, civil society, and regulators. He rejects single-entity control, advocating for a multi-stakeholder model: “Government sets the standards, independent technical bodies conduct audits, and civil society provides oversight.” This mix reduces capture and increases trust.
Geopolitical Implications
Geopolitics adds another layer to the AI landscape. Restrictions on chips and models are increasingly used as tools of statecraft. Koppikar warns that control exercised for strategic advantage differs from safety-driven regulation: it risks fracturing markets and fueling secretive AI races instead of making systems safer.
Practical Policy Recommendations
What constitutes practical policy? First, compliance should be built into product design, treating rules as design constraints rather than afterthoughts. Second, compliance costs for startups should be lowered through sandboxes and shared standards. Third, human oversight for critical outputs must be mandated. Finally, compute markets should be monitored to prevent concentration among a few players.
Koppikar warns, “The moment you say government control, you’re actually making the innovation die.” Too much control can stifle progress, while a lack of control poses its own risks. The real challenge for policymakers is clear yet difficult: protect people while keeping the door open for innovation.
“We now live in an AI-first world,” he emphasizes. The policies written today will determine who shapes that world. If regulations favor only the established players, the future may be concentrated in a few hands. If they are smart, flexible, and collaborative, the future could be diverse and inclusive.
The stakes are high, and the debate is worth having.