Key Considerations for AI Governance in the Boardroom

As artificial intelligence (AI) dramatically increases an organisation's operational tempo, it raises pressing questions about governance. At the ICAEW's Corporate Governance conference, speakers emphasised the significance of AI governance and the urgent need for boards to give the technology strategic focus.

The Connection Between AI Use and Company Purpose

There is a direct link between a company's purpose and its use of AI. Boards must set the strategic tone, not only to comply with regulations such as the EU AI Act but also to ensure that their applications of AI are ethically sound. The challenge lies in balancing the dual responsibilities of legal compliance and keeping employees and stakeholders ethically comfortable with how AI is used.

Despite rapid advances, AI tools remain susceptible to bias and hallucinations. Some applications carry high risks, and an AI system's behaviour can change over time, leading to problems such as drift, where a tool's outputs gradually deviate from its original design.
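The drift problem described above can be monitored quantitatively. A minimal sketch, assuming model scores are logged over time, using the Population Stability Index, a common drift metric; the function name and threshold are illustrative, not part of any standard:

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Compare two distributions of model scores with the Population
    Stability Index (PSI). Values above roughly 0.2 are often treated
    as a signal that the model's behaviour has drifted."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against identical values

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # small floor avoids log(0) for empty bins
        return [max(c / total, 1e-6) for c in counts]

    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

Run on identical distributions the index is zero; on a shifted distribution it rises sharply, giving a board-level dashboard a simple "has this tool changed?" signal.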

Board Engagement with AI

Industry leaders emphasise the need for boards to engage actively with AI technologies. Traditionally, AI has been perceived as the purview of IT departments, but this narrow view hinders progress and fosters a culture of fear.

Pauline Norstrom, CEO of Anekanta AI, warns that excessive caution can prevent employees from experimenting with beneficial AI tools. She suggests that a better understanding of these technologies will alleviate fears and encourage responsible usage.

Tuomas Syrjainen, Co-founder of Futurice, urges board members to not only seek opinions about AI but to actively use the tools themselves. This hands-on approach fosters a better understanding of AI’s capabilities and encourages collaborative discussions about its applications in business.

The Importance of AI Literacy

AI literacy training is a foundational element of effective governance, and under the EU AI Act it is a legal requirement. Knowledge of datasets and model training enables professionals to evaluate AI outputs critically and check that they align with expectations.

Ethical Considerations: Sustainability and EDI

Amid discussions of AI governance, key ethical issues arise, such as the impact on job roles and sustainability. As AI takes over traditional roles, professionals will need to adapt, evolving into system builders who continuously improve AI solutions.

Furthermore, the sustainability risks of AI use, particularly its energy consumption, must be addressed. Companies should distinguish tasks that require deep reasoning from those manageable with simpler queries, thereby reducing unnecessary energy usage.
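The deep-reasoning-versus-simple-query distinction above is sometimes implemented as model routing. A toy sketch under the assumption that two models of different cost are available; the model names and heuristics here are hypothetical placeholders, not a real API:

```python
def route_query(prompt: str, word_threshold: int = 40) -> str:
    """Toy heuristic router: send short, simple prompts to a lightweight
    model and reserve the expensive reasoning model for long or
    multi-step requests, cutting energy use on routine queries."""
    reasoning_markers = ("explain why", "step by step", "compare", "analyse")
    needs_reasoning = (
        len(prompt.split()) > word_threshold
        or any(marker in prompt.lower() for marker in reasoning_markers)
    )
    return "large-reasoning-model" if needs_reasoning else "small-efficient-model"
```

In practice, routers use learned classifiers rather than keyword lists, but the governance point is the same: not every query should hit the most energy-hungry model.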

On the topic of equality, diversity, and inclusion (EDI), Norstrom highlights the necessity for diverse perspectives in board decisions regarding AI governance. Current board compositions often reflect a single demographic, leading to skewed data interpretation and insights.

Guidance for AI Governance

Numerous standards and guidelines exist to help boards navigate AI governance. Resources such as ISO/IEC 42001 and the EU AI Act serve as valuable references for developing effective policies, risk management strategies, and AI inventories.
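An AI inventory of the kind mentioned above can start very simply: a register of each system, its owner, and its risk tier. A minimal sketch, assuming the simplified four-tier risk framing commonly used when discussing the EU AI Act; the class and field names are illustrative:

```python
from dataclasses import dataclass

# Simplified risk tiers, loosely following EU AI Act framing
RISK_LEVELS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI inventory."""
    name: str
    owner: str
    purpose: str
    risk_level: str

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

class AIInventory:
    """Register of AI systems, queryable by risk for board reporting."""
    def __init__(self):
        self._records = []

    def register(self, record: AISystemRecord) -> None:
        self._records.append(record)

    def high_risk(self) -> list:
        # systems needing the heaviest oversight and documentation
        return [r for r in self._records
                if r.risk_level in ("high", "prohibited")]
```

Even this skeleton forces the two questions boards most often cannot answer: what AI is in use, and who owns each system.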

In summary, as AI technologies continue to evolve, the role of governance becomes increasingly crucial. Boards must embrace AI, fostering a culture of understanding and responsible usage to harness its full potential while mitigating ethical risks.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...