AI Has a Governance Problem
Artificial intelligence (AI) is increasingly intertwined with real-world decision-making, shaping how online content is moderated, how harmful behavior is flagged, and how public agencies assess risk. AI has moved beyond the testing phase to become a fundamental component of platform and institutional operations.
The Shift from Ethics to Governance
For years, discussions surrounding responsible AI primarily revolved around ethical issues such as fairness, bias, transparency, and values. While these conversations remain crucial, many of the current failures in AI systems stem not merely from ethical disagreements or technical flaws, but from a lack of clear responsibility and weak oversight.
In essence, AI has evolved into a governance issue.
Failures in Governance Lead to AI Failures
Across countries, AI is used to cope with scale. Social media platforms rely on automated systems to process millions of posts daily, while public agencies apply AI to prioritize cases and monitor online harm. Yet when failures occur, the immediate question tends to focus on the accuracy of the model, overlooking the deeper governance issues.
Common governance gaps include:
- No clear owner for an AI system
- Limited oversight before deployment
- Weak escalation procedures when harm arises
- Divided responsibility among developers, deployers, and regulators
These gaps have been recognized in international policy discussions on AI accountability, including initiatives by the OECD and the World Economic Forum's AI Governance Alliance.
Lessons from Online Harm and Content Moderation
Challenges surrounding AI governance were highlighted in a recent podcast on hate speech, deepfakes, and online safety, in which researchers and regulators alike acknowledged the limits of AI-driven moderation.
Content moderation operates in layers:
- Automated systems flag potential harm
- Human moderators review complex or disputed cases
- Regulators intervene when platforms fail to take action
Breakdowns occur when accountability is absent in one or more of these layers. Platforms may underinvest in local language and cultural context, and oversight may rely on complaints rather than proactive prevention. As a result, responsibility can shift among developers, platforms, and regulatory bodies.
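This layered division of labor can be made concrete with a short sketch. The code below is illustrative only: the thresholds, queue names, and the placeholder `automated_flag` score are assumptions for this example, not any platform's actual pipeline. It shows how a post might pass from an automated filter to a human review queue, and how an unresolved case could become a regulatory matter.

```python
# Minimal sketch of a layered moderation flow; names and thresholds are
# illustrative assumptions, not any platform's real pipeline.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str

def automated_flag(post: Post) -> float:
    """Hypothetical model score in [0, 1]; higher means more likely harmful."""
    return 0.0  # placeholder: a real system would call a trained classifier

def route(post: Post, score: float) -> str:
    """Layer 1 and 2: automated action or handoff to human moderators."""
    if score >= 0.95:
        return "remove_and_log"      # high-confidence automated removal
    if score >= 0.60:
        return "human_review_queue"  # ambiguous cases go to human moderators
    return "published"               # low-risk content stays up

def escalate_unresolved(hours_in_queue: float) -> str:
    """Layer 3: if the platform fails to act, the case becomes a regulatory matter."""
    return "regulator_referral" if hours_in_queue > 72 else "awaiting_moderator"
```

The point of the sketch is that each handoff needs an accountable owner; if no one owns the review queue or the referral step, that layer fails silently.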
Responsibility Gaps in AI Systems
Responsibility for AI systems used in online safety and public services is shared, which is why AI-related harm usually results from multiple failures rather than a single fault. A system built by one party and deployed by another can fall through the cracks when ownership and oversight are disconnected.
The Critical Importance of Governance in Child Safety
The risks associated with AI are particularly pronounced when children are involved. AI-generated deepfakes and synthetic images complicate the detection of online abuse. Organizations like UNICEF warn that AI introduces new risks for children that cannot be mitigated by technology alone.
In January 2026, the chatbot Grok faced global scrutiny after being misused to create non-consensual sexual images involving minors. The incident shows how quickly harm can spread from niche tools into mainstream platforms, and it reflects a failure of governance rather than merely a failure of detection.
Public Sector AI: Hidden Risks
AI adoption in public services—including education, welfare, and enforcement—can have profound consequences. When public sector AI fails, it erodes trust in institutions. However, governance often lags behind adoption, with many systems introduced without independent review or clear accountability.
Public confidence can diminish rapidly when institutions cannot clearly answer the question: Who is responsible?
Defining Responsible AI in Practice
Responsible AI does not mean avoiding AI altogether; it means governing it properly. This includes the following, illustrated in the sketch after this list:
- Clear ownership of each AI system
- Defined roles for oversight and review
- Documented decision-making and risk assessment processes
- Ongoing monitoring of real-world impacts
- The capability to pause or withdraw systems when harm emerges
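One way to make these elements tangible is to record them as structured metadata alongside every deployed system. The sketch below is a hypothetical example: the `GovernanceRecord` class and its field names are assumptions for illustration, not an established schema or standard.

```python
# Illustrative governance record for a deployed AI system; field names are
# assumptions for this example, not a standard schema.
from dataclasses import dataclass
from typing import List

@dataclass
class GovernanceRecord:
    system_name: str
    owner: str                     # a single, named accountable owner
    oversight_roles: List[str]     # who reviews the system, and how
    risk_assessment_doc: str       # pointer to the documented risk assessment
    monitored_impacts: List[str]   # real-world impacts tracked after launch
    paused: bool = False           # whether the system has been withdrawn

    def pause(self, reason: str) -> None:
        """Pause or withdraw the system when harm emerges."""
        self.paused = True
        print(f"{self.system_name} paused: {reason}")

# Hypothetical record for a content-risk model with named accountability.
record = GovernanceRecord(
    system_name="content-risk-scorer",
    owner="Head of Trust and Safety",
    oversight_roles=["Legal review", "Policy review", "Quarterly audit"],
    risk_assessment_doc="assessments/content-risk-scorer.md",
    monitored_impacts=["wrongful takedown appeals", "harm reports per week"],
)
record.pause("spike in wrongful takedowns")
```

Keeping such a record forces the governance questions to be answered before deployment rather than after harm appears.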
It is crucial to recognize that not all risks can be addressed through improved models. Decisions regarding acceptable use, escalation, and enforcement require human judgment and leadership at senior levels.
From Discussion to Decision-Making
The conversation surrounding responsible AI has shifted from abstract discussions to concrete decision-making. Key questions now include:
- Who owns the system?
- Who oversees it?
- Who acts when harm begins to appear?
Institutions that cannot answer these questions face regulatory, reputational, and trust risks, however advanced their technology. As AI becomes more embedded in public life, responsible AI governance must be treated as a core responsibility: essential for building trust, mitigating harm, and enabling innovation in a way society can accept.