Regulating AI: Balancing Innovation and Safety

Time to Regulate AI

Artificial Intelligence (AI) is heralded as the most transformative technology of our era, influencing every facet of human activity. However, this immense power is accompanied by significant risks. Because the inner workings of many AI systems are opaque, harms are difficult to detect and attribute, and they can propagate rapidly, leaving little opportunity for correction or redirection.

Risks of AI

The risks associated with AI extend to systemic instability, as demonstrated by algorithm-driven financial “flash crashes”. A notable example occurred on May 6, 2010, when the Dow Jones Industrial Average plummeted by nearly 1,000 points in a matter of minutes, briefly erasing approximately $1 trillion in market value; most of the losses were recovered by the end of the trading day. If left unregulated, such risks threaten to undermine trust in institutions and destabilize markets.

Perhaps more alarming is AI’s potential to inflict physical violence. A 2021 UN report indicated that, during the Libyan conflict in 2020, Turkish-made Kargu-2 drones utilizing AI-based image recognition may have engaged human targets without direct human oversight, a possible first for autonomous lethal force.

Reports from the war in Gaza in 2023–24 suggested that Israel employed an AI system named “Lavender” to automatically generate target lists for bombing campaigns. By lowering the threshold for strikes, the system reportedly contributed to civilian casualties and raised serious questions about the morality, legality, and accountability of delegating lethal targeting to machines.

Bias and Inequality

AI systems are prone to inheriting the biases of their creators and often reflect hidden biases in their training data, replicating human prejudices in ways that can deepen societal inequalities. A case in point is the COMPAS algorithm used in U.S. courts to predict the risk of reoffending: a 2016 ProPublica analysis found that it falsely labeled Black defendants who did not reoffend as “high risk” at nearly twice the rate of white defendants, affecting bail and sentencing decisions.

Similarly, Amazon scrapped an AI hiring tool in 2018 after discovering it discriminated against female applicants, and in 2019, Apple faced criticism when the Apple Card’s credit algorithm reportedly offered significantly lower credit limits to women than to men with comparable financial profiles.
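
Disparities like these are typically surfaced through fairness audits that compare error rates across demographic groups. The following is a minimal sketch of that approach, assuming hypothetical data: it computes each group’s false-positive rate (people who did not reoffend but were still flagged “high risk”), the same style of metric ProPublica used in its COMPAS analysis. The record structure and numbers are invented for illustration.

```python
# Minimal fairness-audit sketch: compare false-positive rates across groups.
# All data below is hypothetical; a real audit would use actual case records.

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend
    but were still predicted high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(1 for r in non_reoffenders if r["predicted_high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical audit records: model prediction vs. actual outcome.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": True},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(records, g):.2f}")
# Unequal false-positive rates mean the model errs against one group more
# often, even when overall accuracy looks similar for both groups.
```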

Corporate Control and Geopolitical Risks

A handful of large U.S. corporations dominate AI resources and computing power, a concentration that fosters monopolies, widens the digital divide, and carries significant geopolitical risks. The role of Facebook’s algorithm in the violence against the Rohingya in Myanmar in 2016–17 illustrates this danger: optimized to maximize user engagement for advertising revenue, the recommendation algorithm amplified and normalized hateful content, contributing to the killing of thousands and the displacement of over 700,000 people.

Need for Regulation

While AI has the potential to enhance human welfare, it equally possesses the capacity for immense harm. The release of ChatGPT in November 2022 served as a wake-up call, highlighting AI’s ability to spread misinformation and undermine democratic processes, and prompting widespread calls for regulation.

The European Union took the lead in this initiative with the EU Artificial Intelligence Act, which entered into force in August 2024 and becomes fully applicable by August 2026. The Act adopts a “risk-based” approach, sorting AI applications into tiers from minimal and limited risk up to high risk and unacceptable risk, with obligations to ensure safety, transparency, and non-discrimination. High-risk AI systems will require conformity assessments and continuous monitoring, while systems posing unacceptable risk, such as government social scoring, are banned outright.
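
As a rough illustration of this risk-based structure, here is a simplified sketch that models the Act’s four tiers as a small Python lookup. The tier names follow the Act, but the example systems and the table-style classification are simplifying assumptions: the Act’s actual categorization rests on legal criteria and annexes rather than a lookup table.

```python
# Simplified sketch of the EU AI Act's four risk tiers and their obligations.
# Tier names follow the Act; the example systems are illustrative assumptions.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., government social scoring)"
    HIGH = "conformity assessment plus continuous monitoring"
    LIMITED = "transparency duties (e.g., disclosing that users face a chatbot)"
    MINIMAL = "no new obligations (e.g., spam filters)"

# Illustrative mapping only; real classification follows the Act's annexes.
EXAMPLE_SYSTEMS = {
    "social_scoring":   RiskTier.UNACCEPTABLE,
    "cv_screening":     RiskTier.HIGH,     # employment is a high-risk domain
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter":      RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Look up a system's tier and describe its obligations."""
    tier = EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} -> {tier.value}"

for name in EXAMPLE_SYSTEMS:
    print(obligations(name))
```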

Global Responses to AI Regulation

As AI regulation evolves, global responses diverge. The Trump administration in the U.S. has pushed for deregulation, emphasizing American leadership in AI. At the 2025 World AI Conference, China introduced a Global AI Governance Action Plan focused on international cooperation and safety regulation. Brazil, meanwhile, is nearing enactment of its own risk-based AI regulation bill, and India is in the early stages of developing a regulatory framework.

Conclusion

Even the EU Act leaves gaps in accountability and safety compliance, particularly around who bears liability when AI systems cause harm. India could excel in its regulatory approach by addressing these gaps, adopting context-driven criteria for risk evaluation and creating a national AI office to oversee compliance.

As the conversation around AI regulation continues, it is imperative to strike a balance between fostering innovation and protecting citizens’ rights and data privacy.
