Preparing for the EU AI Act: Essential Steps for CIOs

The countdown to full enforcement of the European Union’s AI Act is progressing steadily, and the initial rules, covering prohibited use cases and AI literacy obligations, have already started to apply. Most of the act’s remaining provisions take effect for organizations next August, with enforcement commencing the following year. Industry experts say many businesses are already behind the curve.

“The state of readiness is not great,” according to one report, and organizations are still grappling with the act’s implications. CIOs and technology leaders have a central role in steering their organizations toward compliance, which means keeping pace with evolving requirements, managing vendors, and conducting risk assessments.

Failure to comply with the act can result in fines of up to $37.9 million (35 million euros) or 7% of worldwide annual turnover, whichever is higher, depending on the severity and duration of the infringement. Providing incomplete or misleading information to enforcers can incur penalties of up to $8.1 million (7.5 million euros). These rules apply to any business operating in or serving customers within the EU, regardless of where it is headquartered.

Where to Start

Though the majority of the provisions will not take full effect for over a year, experts warn organizations against a wait-and-see approach. The rollout of the General Data Protection Regulation (GDPR) serves as a cautionary tale: many organizations only began addressing compliance in the last three months before the deadline, resulting in a frantic rush to meet requirements.

CIOs can steer their organizations toward compliance with the EU AI Act even if efforts have not yet begun, but starting promptly is crucial. Experts suggest initial actions should include:

  • Cataloging AI uses
  • Organizing a compliance team
  • Creating an AI literacy initiative

Identifying every instance of AI use is essential for determining whether any of a company’s applications fall under the EU’s list of prohibited practices, and it has proven an effective first step for many large organizations.
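To make the cataloging step concrete, below is a minimal sketch of what such an inventory might look like in code. Everything in it is an assumption for illustration: the field names, the risk tiers, and the example entry are not prescribed by the act, and actual risk classification is ultimately a legal judgment rather than something this structure decides.

```python
from dataclasses import dataclass, field
from enum import Enum


# Illustrative risk buckets loosely mirroring the act's tiers; a real
# classification should come from legal review, not from this sketch.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"


@dataclass
class AIUseCase:
    name: str                  # e.g. "resume screening assistant"
    owner: str                 # accountable business unit or person
    vendor: str | None         # None if built in-house
    purpose: str               # what the system is actually used for
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED
    notes: str = ""


@dataclass
class AIInventory:
    entries: list[AIUseCase] = field(default_factory=list)

    def add(self, use_case: AIUseCase) -> None:
        self.entries.append(use_case)

    def needing_review(self) -> list[AIUseCase]:
        """Entries that still lack a risk classification."""
        return [e for e in self.entries if e.risk_tier is RiskTier.UNCLASSIFIED]

    def flagged(self) -> list[AIUseCase]:
        """Entries classified as prohibited or high risk."""
        return [e for e in self.entries
                if e.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH)]


# Example usage with a hypothetical entry
inventory = AIInventory()
inventory.add(AIUseCase(
    name="customer support chatbot",
    owner="Customer Service",
    vendor="ExampleVendor",    # hypothetical vendor name
    purpose="answer routine billing questions",
))
print(len(inventory.needing_review()))  # -> 1, still awaiting classification
```

However the inventory is stored, the useful property is the same: every AI use has a named owner and an explicit classification status, so unreviewed or prohibited-looking uses can be surfaced quickly.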

Collaboration across departments is essential, as AI is often integrated throughout a business. This has led many organizations to form multidisciplinary teams to promote shared accountability in compliance efforts.

Vendor Management

As organizations develop their compliance strategies, managing vendor relationships becomes increasingly important. More companies are building generative AI tools in-house, yet a significant portion of AI usage still relies on third-party vendors. That adds complexity, particularly as vendors keep enhancing their offerings with new AI capabilities.

CIOs have voiced concerns about AI washing and vendor-driven AI hype. New features must be tracked carefully to ensure compliance: businesses need to assess not just the products but the individual features within them, and many vendors have yet to publish comprehensive lists of the AI systems their products contain.

Organizations should also be wary of default settings when upgrading software, since compliance gaps can open up if vendors activate AI features automatically without explicit approval. Insisting that vendors ship such features disabled by default mitigates much of that risk.
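One lightweight way to operationalize this is an internal register of vendor AI features that flags anything shipped enabled by default without a recorded sign-off. The sketch below is illustrative only: the vendor names are hypothetical and the structure is an assumption, not any vendor’s actual interface.

```python
from dataclasses import dataclass


@dataclass
class VendorAIFeature:
    vendor: str
    product: str
    feature: str
    enabled_by_default: bool   # does the vendor switch it on during upgrades?
    approved: bool = False     # has internal compliance review signed off?


def unapproved_defaults(features: list[VendorAIFeature]) -> list[VendorAIFeature]:
    """Flag AI features a vendor enables by default that have not yet been
    through internal review -- the compliance gap described above."""
    return [f for f in features if f.enabled_by_default and not f.approved]


# Hypothetical register entries
register = [
    VendorAIFeature("ExampleCRM", "Sales Suite", "email draft generator",
                    enabled_by_default=True),
    VendorAIFeature("ExampleHR", "Recruiting", "candidate ranking",
                    enabled_by_default=False),
]

for f in unapproved_defaults(register):
    print(f"Review needed: {f.vendor} / {f.product} / {f.feature}")
```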

Keeping Up with Compliance

As organizations navigate the compliance landscape, it is equally important to establish a system for monitoring regulatory developments. Experts advocate for creating an internal timeline of key milestones and staying informed on the evolving legal landscape.
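As a simple illustration, that internal timeline can be little more than a dated list of the act’s phase-in milestones checked against the current date. The dates in the sketch below reflect the act’s widely reported schedule and should be confirmed against the official text and any subsequent guidance; the surrounding structure is an assumption for illustration.

```python
from datetime import date

# Key EU AI Act milestones (phase-in dates as widely reported;
# confirm against the official text before relying on them).
MILESTONES = {
    date(2025, 2, 2): "Prohibited practices banned; AI literacy obligations apply",
    date(2025, 8, 2): "Obligations for general-purpose AI models apply",
    date(2026, 8, 2): "Most remaining provisions, including high-risk rules, apply",
    date(2027, 8, 2): "Rules for high-risk AI embedded in regulated products apply",
}


def upcoming(today: date | None = None) -> list[tuple[date, str]]:
    """Return milestones that have not yet passed, soonest first."""
    today = today or date.today()
    return sorted((d, desc) for d, desc in MILESTONES.items() if d >= today)


for d, desc in upcoming():
    print(f"{d.isoformat()}: {desc}")
```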

Various vendors and industry groups are positioning themselves as resources to help companies achieve and maintain compliance, with EU AI Act assessment tools and conformity-assessment platforms emerging to support enterprises through the transition.

Maintaining compliance with AI regulations is not a one-time endeavor; it requires ongoing monitoring, updates, and a structured process to ensure adherence to the evolving legal requirements.

Ultimately, organizations that prioritize compliance and transparency are more likely to find themselves in advantageous positions in the face of new regulations. Establishing robust processes for data management and accountability will provide a solid foundation for navigating future regulatory challenges.
