AI Standards and Regulations: Bridging the Gap for Responsible Innovation

The Interplay Between AI Standards and Regulations

Jurisdictions worldwide are increasingly considering policies to ensure that artificial intelligence (AI) systems are developed and used responsibly while balancing safety with innovation. However, the race to the top of the global AI market is outpacing regulation, leaving companies exposed to the risks that come with rushed implementation plans.

Current Developments in Regulatory Frameworks

During the AI Standards Hub Global Summit 2026, stakeholders noted that while regulatory frameworks such as the EU AI Act continue to evolve, organizations are focusing their efforts on technical standards, assurance systems, and other tools to set compliance practices on the right course.

According to Sara Rendtorff-Smith, Head of the OECD Division on AI and Emerging Digital Technologies, organizational and industry standards are “essential to governing AI well” and serve as the “quiet infrastructure of innovation,” enabling safe and responsible scaling of AI across economies and societies.

The Role and Perception of Standards

Rendtorff-Smith emphasized that the ability to govern effectively on a global scale must grow as quickly as AI technologies are advancing. International cooperation remains crucial for balancing sector-specific standards and regulations.

However, standards are perceived differently across jurisdictions. Some EU organizations argue they cannot effectively comply with the AI Act without the industry standards promised before implementation deadlines. Conversely, U.S. organizations rely on the National Institute of Standards and Technology’s suite of AI standards, including the AI Risk Management Framework and the AI Agent Standards Initiative, amid a patchwork of state laws and no comprehensive federal law.

Emerging Priorities in Standards Development

Tailoring standards to common practices and policies is becoming an emerging priority. The OECD is keeping stakeholders informed through its AI Policy Observatory, tracking over 2,000 AI policies across more than 80 jurisdictions. Similarly, the IAPP tracks developments through its Global AI Law and Policy Tracker.

The variance across global proposals, along with the fast-evolving nature of AI, is placing new pressures on standards development. David Bell, Standards Policy Director at the British Standards Institution, remarked, “AI is testing the system like nothing else ever has.” He indicated that standards must adapt to the fundamental changes AI brings to work processes.

Collaborative Enforcement and Best Practices

Collaborative enforcement serves as a reference point for best practices in standards development. Rendtorff-Smith warned that insufficient global enforcement and a “very fragmented landscape” jeopardize the foundation that standards need to remain strong and adaptable. This fragmentation could impose significant compliance costs on businesses, create barriers to cross-border deployment, and stifle innovation.

Complementary Roles of Standards and Regulation

Florian Ostmann, Distinguished Policy Fellow at the London School of Economics and Political Science, highlighted the interrelationship between technical standards and enforcement efforts in supporting responsible AI safeguards. He stated that while standards facilitate the implementation of, and compliance with, regulation, they also perform functions that regulation cannot.

Next Steps for AI Standards and Regulations

Stakeholders suggest that enforcement efforts should focus on enhancing coordination across regulatory and technical tools while addressing potential gaps in AI implementation. Luis Aranda, OECD AI Senior Economist, noted the need for organizations to measure their compliance and data protection safeguards. He emphasized that while standards and regulations define expectations for trustworthy AI, consistent methods for assessing system performance remain limited.

Aranda stressed the importance of developing inclusive and globally representative governance approaches, which require shared foundations, concepts, and definitions. He acknowledged concerns about specific AI regulations potentially hindering innovation, particularly in light of the ongoing global AI race.

“No country wants to be the first runner-up while everyone else is sprinting ahead,” Aranda added, reflecting the timing concerns that contribute to the ongoing evolution of national AI initiatives.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...