AI Governance: Bridging the Leadership Gap

Stewarding AI: Governance Needs to Catch Up

We are firmly entrenched in the era of intelligent machines. Not long ago, artificial intelligence (AI) was barely a footnote in the working lives of most professionals. But that changed rapidly in 2023 with the release of powerful generative AI (GenAI) models. These tools not only mimic human language but also produce work outputs that rival – and often surpass – those of seasoned professionals.

AI is no longer just an operational assistant; it is a strategic disruptor. As AI becomes central to how work is done, a critical shift is unfolding – one that calls for boards and senior leaders to radically rethink leadership, oversight, and responsibility.

The question confronting boardrooms now is urgent and profound: how do leaders guide organisations when machines can make decisions, design strategy, and even challenge human authority?

AI’s Surge into the Mainstream

The velocity of AI’s progress is striking. In late 2022, ChatGPT (then powered by GPT-3.5) could not pass basic accounting exams. By early 2023, GPT-4 was outperforming humans on Certified Public Accountant (CPA) and Certified Management Accountant (CMA) assessments. By some estimates, AI could automate up to 60 percent of tasks performed by degree-holding professionals – and perhaps up to 98 percent by 2030.

This is not just about efficiency; it is a redefinition of professional excellence. For boards and executive leaders, the implications are existential. Competence, judgment, and foresight must be re-evaluated in light of what machines can do.

The Human-AI Performance Gap

Human thought operates at around 10 bits per second, while our senses absorb billions of bits. Wi-Fi alone transfers data at 50 million bits per second. AI systems, by contrast, analyse immense data sets in parallel, rendering decisions in milliseconds. In chess, for example, a grandmaster evaluates a handful of possible future moves; an AI engine considers millions – simultaneously.

This raw processing power does not just change what AI can do; it alters how people must work alongside it. The real challenge is not using the tools; it is adapting the very structure of human cognition and decision making to meaningfully engage with machines that ‘think’ at speeds we cannot.
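The chess comparison can be made concrete with a little game-tree arithmetic. The sketch below is illustrative only: the branching factor of 35 is a commonly quoted rough average for chess, not a measured figure, and real engines prune aggressively rather than searching every line.

```python
# Rough game-tree arithmetic behind the chess example.
# Assumption: ~35 legal moves per position on average (a common
# rough estimate for chess); the numbers are illustrative only.

BRANCHING_FACTOR = 35

def positions_searched(depth: int) -> int:
    """Total positions in a full-width search down to `depth` plies."""
    return sum(BRANCHING_FACTOR ** d for d in range(1, depth + 1))

# A human might weigh a handful of candidate continuations;
# a full-width search only 5 plies deep already visits ~54 million.
for depth in (2, 5):
    print(f"depth {depth}: {positions_searched(depth):,} positions")
```

The exponential blow-up, not any single clever move, is why machine search operates on a scale no human deliberation can match.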

Preparing for an AI-Infused Future

Upskilling is necessary, but insufficient. AI-enhanced work demands more than learning how to prompt a model or query a system. It requires a deep recalibration of how professionals approach communication, leadership, ethics, and adaptability.

Writing, for instance, is evolving, rather than disappearing. In an AI world, writing must be strategic, purposeful, and ethical. Leaders must use writing not just to convey information, but to inspire, navigate ambiguity, and build trust in machine-mediated communication.

A New Leadership Test – Ethics, Vision, and Responsibility

Boards are now facing a fundamental test of their leadership. As AI becomes embedded across business functions, from supply chain optimisation through to marketing analytics and strategic forecasting, oversight cannot be an afterthought.

Ethical stewardship is no longer a ‘nice to have’; it is a business imperative. This begins with data privacy. Boards must be accountable for how customer and employee data is collected, used, and protected. It extends to algorithmic bias, which can skew decisions in recruitment, lending, and service provision. And it includes the impact on jobs, culture, and employee relationships.

Oversight in the AI age demands moral courage and strategic clarity. Boards cannot just be technology-aware – they must be ethically grounded and future-oriented. They must ensure that AI governance incorporates a clear framework for ethical evaluation, whether through virtue ethics (acting from good character), deontology (following rules and duties), or consequentialism (judging by outcomes). Decisions must align with the organisation’s purpose, stakeholder interests, and societal values.
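To make the three ethical lenses operable, a governance team could encode them as a minimal review checklist. The sketch below is purely illustrative: the class name, the per-lens questions, and the all-three-must-pass rule are assumptions for illustration, not a standard drawn from any regulation.

```python
# Illustrative sketch only: encoding the three ethical lenses as a
# structured board review. The gating rule (all lenses must clear)
# is an assumption, not a prescribed governance standard.

from dataclasses import dataclass

@dataclass
class EthicsReview:
    decision: str
    virtue: bool = False           # does it reflect the character we want?
    deontology: bool = False       # does it respect rules and duties?
    consequentialism: bool = False # are the expected outcomes acceptable?

    def passes(self) -> bool:
        """Approve only if the decision clears all three lenses."""
        return self.virtue and self.deontology and self.consequentialism

review = EthicsReview("deploy hiring-screening model",
                      virtue=True, deontology=True, consequentialism=False)
print(review.passes())  # False: fails on expected outcomes
```

The design choice worth noting is the conjunction: a decision that is rule-compliant but produces unacceptable outcomes still fails, which mirrors the article's point that compliance alone is not ethical stewardship.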

The Regulatory Void

Despite AI’s pervasive impact, regulation has lagged. With the exception of targeted rules for autonomous vehicles and China’s pioneering AI laws of 2023, most jurisdictions remain unprepared. Yet regulation is essential – not to stifle innovation, but to protect human dignity, a core value that underpins democratic societies. AI’s capacity for autonomous decision making and unpredictability places it beyond the scope of traditional regulatory models designed for static IT systems.

AI is not just another digital tool. Its autonomy and opacity mean we must rethink the foundations of how we regulate, evaluate risk, and assign responsibility. Key regulatory challenges include: (i) foreseeability – AI’s unpredictable behaviour can lead to unintended consequences; (ii) control – systems may act beyond the authority of their developers or legal owners; (iii) modularity – AI components can be developed by dispersed actors, limiting oversight; and (iv) opacity – regulators often lack visibility into AI systems’ inner workings.

The combined effect is a ‘fallibility gap’, a space in which AI decisions can go unanticipated, unregulated, and unaccounted for. Without adequate safeguards, we risk outsourcing critical decisions to systems that cannot be questioned or held responsible.

Boards Must Lead on Ethics

In the face of regulatory delay, boards must act pre-emptively. Leadership, especially at the board level, must model ethical engagement with AI technologies. This includes: (i) creating a culture in which AI decisions are explainable; (ii) ensuring AI’s benefits are shared, not hoarded by a few; (iii) designing AI that works for everyone, across age, ability, and demographic; and (iv) assigning clear ownership for AI outcomes, even when they are automated.

Boards should push for internal ethical standards that exceed existing laws. Think of it as corporate conscience: self-regulation guided by values, not just profit. Employees need reassurance that AI will not be used simply to replace them, but to empower and extend their potential.

Workplace Culture in an AI Era

AI changes more than tasks – it changes relationships. Trust, the currency of organisational cohesion, can erode when expectations are not met. The introduction of AI into workflows brings new psychological dynamics. Who is the decision maker – the manager or the algorithm? Can data be trusted? Will AI impact career trajectories?

The informal, often unspoken expectations employees have of their employers are at risk of fracturing. If these are breached, performance suffers and disengagement rises. Boards must recognise AI as a participant in this ecosystem and ensure that AI-enhanced workplaces are human-centric.

Visionary Leadership: Human Values in a Machine World

The irony of AI’s rise is that it requires more humanity in leadership, not less. As machines automate logic, what remains profoundly human are empathy, creativity, judgment, and purpose. Boards must encourage these traits – not just at the C-suite, but throughout the organisation.

Leadership in the AI era should embrace: (i) understanding employee anxieties and aspirations; (ii) encouraging innovative problem solving beyond the algorithm; (iii) facilitating human-AI teams that amplify, not replace, people; and (iv) investing in skills, values, and wellbeing, not just efficiency.

AI has enormous potential to elevate performance. But this only happens when systems are deployed ethically, relationships are nurtured, and leadership is grounded in trust and vision.

Policy and Collaboration: A Call for Global Governance

AI’s global nature complicates national regulation. No single government can regulate AI in isolation, and failure to act collectively leaves humanity vulnerable to systemic risks, from economic inequality to algorithmic injustice.

Legislatures offer democratic legitimacy; agencies provide technical expertise. But both must work together – and internationally – to build robust frameworks that protect people and foster innovation.

The regulatory goal should be to preserve human dignity – ensuring that as AI expands, we do not reduce individuals to mere data points. Our capacity for autonomy, moral agency, and self-worth must remain intact.

Are Boards Ready?

GenAI is not a passing trend; it is a fundamental transformation. Boards that understand this will lead with both clarity and conscience. That means building AI strategies that are inclusive, transparent, and ethically anchored. It means empowering teams, not replacing them. And it means shaping a digital future where humans and machines coexist – not in competition, but in collaboration.

So, the question stands: are today’s boards ready for this new reality? The answer will define not just the success of individual organisations, but the integrity and resilience of the society we are building.
