Time to Regulate AI
Artificial Intelligence (AI) is heralded as the most transformative technology of our era, influencing every facet of human activity. This immense power, however, comes with significant risks. Because AI algorithms are often opaque, the dangers they pose are hard to detect and correct, and there are already instances in which AI has caused substantial harm to society. Such harms can propagate rapidly, leaving little opportunity for correction or redirection.
Risks of AI
The risks associated with AI extend to systemic instability, as demonstrated by algorithm-driven financial “flash crashes.” A notable example occurred on May 6, 2010, when the Dow Jones Industrial Average plummeted by nearly 1,000 points within minutes, briefly erasing approximately $1 trillion in market value, although roughly 70% of the losses were recovered by the end of the trading day. Left unregulated, such risks threaten to undermine trust in institutions and destabilize markets.
Perhaps more alarming is AI’s potential to inflict physical violence. A 2021 UN report indicated that, during the Libyan civil war in 2020, Turkish-made Kargu-2 drones using AI-based image recognition may have engaged human targets without direct human oversight, possibly the first recorded use of autonomous lethal force against people.
Reports from Gaza in 2023–24 suggested that Israel employed an AI system named “Lavender” to automatically generate target lists for bombing campaigns. By lowering the threshold for strikes, the system reportedly contributed to civilian casualties, raising serious questions about the morality, legality, and accountability of such violence.
Bias and Inequality
AI systems are prone to inheriting the biases of their creators and often reflect hidden biases in their training data, replicating human prejudice in ways that can deepen societal inequality. A case in point is COMPAS, an algorithm used in U.S. courts to predict the risk of reoffending: a 2016 ProPublica investigation found that it disproportionately labeled Black defendants as “high risk” compared with white defendants, affecting bail and sentencing decisions.
Similarly, Amazon scrapped an AI hiring tool in 2018 after discovering that it discriminated against female applicants, and in 2019 Apple faced criticism when its credit card algorithm reportedly offered women significantly lower credit limits than men with comparable financial profiles.
Corporate Control and Geopolitical Risks
A handful of large U.S. corporations dominate AI resources and computing power, a concentration that fosters monopolies, widens the digital divide, and carries significant geopolitical risks. The role of Facebook’s algorithm in the Rohingya genocide in Myanmar in 2016–17 illustrates the danger. Built to maximize user engagement for advertising revenue, the self-learning algorithm amplified and normalized hateful content, contributing to the killing of thousands of Rohingya and the displacement of over 700,000 people.
Need for Regulation
For all its potential to enhance human welfare, AI equally possesses the capacity for immense harm. The public release of ChatGPT in 2022 served as a wake-up call, demonstrating how easily generative AI can spread misinformation and undermine democratic processes, and prompting calls for regulation.
The European Union took the lead with the EU Artificial Intelligence Act, which entered into force in August 2024 and becomes fully applicable by August 2026. The Act adopts a “risk-based” approach, categorizing AI applications by the level of risk they pose and imposing obligations to ensure safety, transparency, and non-discrimination. High-risk AI systems will require continuous monitoring, while applications posing unacceptable risk are banned outright.
Global Responses to AI Regulation
As AI regulation evolves, responses vary across the globe. The Trump administration in the U.S. has pushed for deregulation, emphasizing American leadership in AI, while at the 2025 World AI Conference China introduced a Global AI Governance Action Plan focused on international cooperation and safety standards. Brazil, meanwhile, is nearing enactment of its own risk-based AI regulation bill, and India is in the early stages of developing a regulatory framework.
Conclusion
For all its ambition, the EU Act leaves gaps in accountability and safety compliance, most notably in clarifying who is liable when AI systems cause harm. India could distinguish its regulatory approach by addressing these gaps, adopting context-driven criteria for risk evaluation and creating a national AI office to oversee compliance.
As the conversation around AI regulation continues, it is imperative to strike a balance between fostering innovation and protecting citizens’ rights and data privacy.