What Boards Should Demand from AI: Assessment, Audit, and Assurance
Effective governance of artificial intelligence (AI) has become a critical priority for boards. It involves not only overseeing technology projects but also ensuring that AI-enabled decisions align with the organization’s strategy, risk appetite, and ethical standards.
Understanding Verification in AI Systems
A crucial question for boards is: beyond setting expectations, how can an organization verify that its AI systems perform as intended and operate responsibly? The answer lies in three interrelated disciplines: AI risk assessment, AI audit, and AI assurance. Boards familiar with financial oversight will find the logic intuitive; the challenge is applying the same rigor to AI.
Three Distinct but Related Concepts
To avoid confusion, it is essential to clarify the meaning of each term:
AI Risk Assessment
AI risk assessment is the internal process by which an organization identifies, evaluates, and prioritizes the risks associated with its AI systems. It involves asking critical questions such as: What could go wrong? How likely is it? What would the impact be? This foundational assessment is crucial, as it informs both audit and assurance processes.
Material AI systems appear across sectors; examples include:
- A credit scoring model in a bank
- A patient triage tool in a hospital
- A student performance predictor in a university
- A case prioritization system in a government agency
Risk assessments must be systematic, documented, and revisited regularly as models evolve and operating environments change.
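To make the likelihood-and-impact questions concrete, the sketch below shows one way to record and prioritize AI risks. It is a minimal illustration, assuming a hypothetical 1-to-5 scale for each dimension and a hypothetical escalation threshold; real methodologies are richer and sector-specific.

```python
# Illustrative sketch only: a minimal likelihood x impact scoring scheme.
# The risks, scales, and threshold are hypothetical, not a prescribed method.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRisk("Credit model discriminates against a protected group", 2, 5),
    AIRisk("Triage tool degrades after a data pipeline change", 3, 4),
    AIRisk("Predictor trained on unrepresentative historical cohorts", 4, 3),
]

REPORTING_THRESHOLD = 10  # hypothetical board-escalation cut-off
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= REPORTING_THRESHOLD else "monitor"
    print(f"[{flag}] score={risk.score:>2}  {risk.description}")
```

The point is not the arithmetic but the discipline: each risk is named, scored against stated criteria, and ranked, which is exactly the documentation an auditor will later ask to see.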
AI Audit
AI audit refers to the independent examination of whether an AI system, or its governance framework, conforms to defined standards, policies, or requirements. This evidence-based process requires an independent party to assess compliance with standards such as ISO/IEC 42001, which was published in 2023 as the world’s first AI management system standard.
The auditing profession is beginning to formalize accountability for AI engagements: ISO/IEC 42006, published in July 2025, sets requirements for bodies that audit and certify AI management systems.
AI Assurance
AI assurance is the formal conclusion that emerges from the audit process: a professional opinion that gives boards, regulators, investors, and the public confidence that the AI system, or its management framework, complies with defined standards.
Grounding AI Assurance
The concept of independent assurance is not new to boards. For instance, external auditors examine an organization’s financial statements annually and issue opinions based on evidence and established standards.
The same principle applies to AI. When organizations make public claims about their AI systems—such as fairness, transparency, or compliance—the question remains: who independently validates these claims?
The answer lies in ISAE 3000, the International Standard on Assurance Engagements Other than Audits or Reviews of Historical Financial Information, issued by the International Auditing and Assurance Standards Board (IAASB). Because it governs assurance engagements on non-financial subject matter, it is a natural fit for AI assurance.
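What does independent validation look like in practice? The sketch below tests one narrow fairness claim, a gap in approval rates between two groups, against a stated tolerance. The decision data and the five-point tolerance are hypothetical; an ISAE 3000 engagement evaluates evidence against the full set of criteria the organization has asserted, not a single metric.

```python
# Illustrative sketch only: one narrow check an assurance provider might run
# against a claim that a model treats two groups comparably.
# The decision data and the 0.05 tolerance are hypothetical.
def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

# 1 = approved, 0 = declined, split by a protected attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

gap = abs(approval_rate(group_a) - approval_rate(group_b))
TOLERANCE = 0.05  # hypothetical; real engagements derive this from the criteria
print(f"Approval-rate gap: {gap:.2f}")
print("Within stated criteria" if gap <= TOLERANCE else "Exception: investigate")
```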
Key Implications for Boards
Three practical implications arise from this framework:
- Rigorous AI Risk Assessments: Boards should ensure their organizations conduct comprehensive risk assessments of material AI systems. This is not a one-off exercise but a continuous process that adapts as models retrain, data drifts, and regulatory environments evolve (a simple drift check that can trigger reassessment is sketched after this list).
- Internal vs. External AI Audit: While internal audits are crucial for providing assurance, boards should also consider independent third-party audits, especially for systems that significantly impact customers and the public.
- Credible Assurance: As organizations publicly commit to AI practices, boards must confirm that these commitments are supported by credible independent assurance. Assertions without validation pose reputational risks.
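As flagged in the first point above, continuous reassessment needs a trigger. One common, simple trigger is a drift statistic on the model’s inputs or scores; the sketch below uses the population stability index (PSI), with hypothetical distributions and the commonly cited 0.2 rule-of-thumb threshold.

```python
# Illustrative sketch only: a population stability index (PSI) check, one
# common trigger for re-running a risk assessment when data drifts.
# The distributions and the 0.2 threshold are hypothetical rules of thumb.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned distributions (each list sums to 1.0)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Score distribution at deployment vs. today, in five bins.
baseline = [0.20, 0.25, 0.30, 0.15, 0.10]
current  = [0.08, 0.17, 0.30, 0.25, 0.20]

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")
if drift > 0.2:
    print("Material drift: re-run the AI risk assessment")
else:
    print("Distribution stable: no action required")
```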
Building Capabilities in AI Assurance
It is important to acknowledge that the infrastructure for AI assurance is still maturing. Professional standards, and auditor competencies in areas such as machine learning and algorithmic bias, are not yet uniformly established.
Organizations not ready for formal assurance should still undertake structured assessments of material AI systems. This builds the internal discipline and governance required for future assurance readiness.
By commissioning these assessments now, boards can prepare for heightened regulatory expectations and stakeholder scrutiny, and help shape responsible AI accountability in their sectors.
The journey toward effective AI assurance is akin to the evolution of financial auditing, which took decades of standard-setting and learning from corporate failures to establish credibility. Boards that engage with AI assurance today will not only mitigate their risks but also contribute to defining accountability standards in the evolving landscape of AI.