Regulating AI: Balancing Innovation and Safety

Time to Regulate AI

Artificial Intelligence (AI) is heralded as the most transformative technology of our era, influencing every facet of human activity. This immense power, however, carries significant risks. The opacity of AI algorithms compounds these dangers, and AI systems have already caused substantial harm to society. Because such systems operate at scale, harms can propagate rapidly, leaving little opportunity for correction or redirection.

Risks of AI

The risks associated with AI extend to systemic instability, as demonstrated by algorithm-driven financial “flash crashes”. A notable example occurred on May 6, 2010, when the Dow Jones Industrial Average plummeted by nearly 1,000 points within minutes, erasing approximately $1 trillion in market value; roughly 70% of the losses were recovered by the end of the trading day. Left unregulated, such risks threaten to undermine trust in institutions and destabilize markets.

Perhaps more alarming is AI’s potential to inflict physical violence. A 2021 UN report indicated that, during the 2020 conflict in Libya, Turkish-made Kargu-2 drones using AI-based image recognition may have engaged human targets without direct human oversight, a possible first for autonomous lethal force.

Reports from Gaza in 2023–24 suggested that Israel employed an AI system named “Lavender” to automatically generate target lists for bombing campaigns. By lowering the threshold for strikes, the system reportedly contributed to civilian casualties and raised serious questions about the morality, legality, and accountability of such violence.

Bias and Inequality

AI systems are prone to inheriting the biases of their creators and often reflect hidden biases in their training data, replicating human prejudices and deepening societal inequalities. A case in point is the COMPAS algorithm used in U.S. courts to predict reoffending risk: a 2016 ProPublica investigation found that among defendants who did not reoffend, Black defendants were far more likely than white defendants to have been labeled “high risk”, a disparity that affected bail and sentencing decisions.
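
The disparity at the heart of that controversy is a difference in false positive rates: the share of people in each group who were labeled high risk but did not reoffend. As a minimal sketch of how such a disparity is measured (using invented toy data, not the actual COMPAS records, and a hypothetical function name), one might compute per-group false positive rates like this:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false positive rate: the share of people who did NOT
    reoffend but were still labeled 'high risk'."""
    fp = defaultdict(int)   # non-reoffenders labeled high risk
    neg = defaultdict(int)  # all non-reoffenders
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Illustrative toy data: (group, predicted_high_risk, actually_reoffended)
toy = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rates(toy))  # e.g. {'A': 0.67, 'B': 0.33}
```

An audit of this kind requires ground-truth outcomes as well as predictions, which is why bias in deployed systems often surfaces only years after the fact.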

Similarly, Amazon scrapped an internal AI hiring tool in 2018 after discovering that it penalized résumés from female applicants, and in 2019 Apple faced criticism when its credit card algorithm reportedly offered significantly lower credit limits to women than to men with similar financial profiles.

Corporate Control and Geopolitical Risks

A handful of large U.S. corporations dominate AI resources and computing power, a concentration that fosters monopolies, widens the digital divide, and creates geopolitical risks. The role of Facebook’s algorithm in the Rohingya genocide in Myanmar in 2016–17 illustrates this danger: optimized to maximize user engagement for advertising revenue, the recommendation algorithm amplified hateful content, contributing to the killing of thousands and the displacement of more than 700,000 people.

Need for Regulation

For all its potential to enhance human welfare, AI equally possesses the capacity for immense harm. The public release of ChatGPT in November 2022 served as a wake-up call, highlighting AI’s ability to spread misinformation and undermine democratic processes, and prompting widespread calls for regulation.

The European Union took the lead with the EU Artificial Intelligence Act, which entered into force in August 2024 and becomes fully applicable by August 2026. The Act adopts a “risk-based” approach, categorizing AI applications by the risk they pose and establishing obligations to ensure safety, transparency, and non-discrimination: high-risk AI systems must undergo continuous monitoring, while applications deemed to pose unacceptable risk are banned outright.
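
To illustrate the shape of this risk-based approach, here is a minimal Python sketch of the Act’s four tiers. The tier names reflect the Act’s structure; the use-case mapping is a deliberately simplified assumption for illustration, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk management, logging, human oversight"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no new obligations"

# Simplified, assumed mapping of example use cases to tiers -- the Act's
# actual classification rules are far more detailed.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        return "unclassified: assess against the Act's criteria"
    return f"{tier.name}: {tier.value}"

for case in EXAMPLE_TIERS:
    print(f"{case} -> {obligations(case)}")
```

The design point is that obligations attach to the use case, not to the underlying model: the same model could be minimal-risk in one deployment and high-risk in another.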

Global Responses to AI Regulation

As AI regulation evolves, global responses diverge. The Trump administration in the U.S. has pushed for deregulation, emphasizing American leadership in AI, while at the 2025 World AI Conference China introduced a Global AI Governance Action Plan focused on international cooperation and safety standards. Brazil, meanwhile, is nearing enactment of its own risk-based AI regulation bill, and India is in the early stages of developing a regulatory framework.

Conclusion

Even the EU Act leaves gaps in accountability and safety compliance, particularly around who is liable when an AI system causes harm. India could improve on this approach by addressing those gaps, adopting context-driven criteria for risk evaluation, and creating a national AI office to oversee compliance.

As the conversation around AI regulation continues, it is imperative to strike a balance between innovation and the protection of citizens’ rights and data privacy.
