CERTAIN Drives Ethical AI Compliance in Europe

In response to increasing regulation such as the EU AI Act, the EU-funded initiative CERTAIN aims to drive ethical AI compliance across Europe. Launched in January 2025, CERTAIN stands for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence”; the project focuses on developing tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

Led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, CERTAIN seeks to establish a model that could serve as a blueprint for global AI governance.

Driving Ethical AI Practices in Europe

According to a senior researcher at the St. Pölten University of Applied Sciences (St. Pölten UAS), the primary goal of CERTAIN is to tackle crucial regulatory and ethical challenges in AI development. The initiative aims to create tools that ensure AI systems are transparent and verifiable, adhering to the requirements set forth by the EU’s AI Act.

By developing practical, feasible solutions, CERTAIN intends to help companies fulfill regulatory requirements efficiently while bolstering trust in AI technologies.

CERTAIN’s efforts are focused on creating user-friendly tools and guidelines that simplify compliance with complex AI regulations, thereby helping public and private organizations navigate these rules effectively.

Harmonizing Standards and Improving Sustainability

One of the core objectives of CERTAIN is to establish consistent standards for data sharing and AI development throughout Europe. By setting industry-wide norms for interoperability, the project seeks to enhance collaboration and efficiency in the application of AI-driven technologies.

This effort not only targets compliance but also aims to unlock new opportunities for innovation. CERTAIN’s solutions will facilitate the creation of open and trustworthy European data spaces, which are vital for driving sustainable economic growth.

In alignment with the EU’s Green Deal, CERTAIN is committed to sustainability. It recognizes the significant environmental challenges posed by AI technologies, such as high energy consumption and resource-intensive data processing. To address these issues, CERTAIN promotes the development of energy-efficient AI systems and advocates for eco-friendly data management practices.

A Collaborative Framework to Unlock AI Innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS actively engages researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.

This collaborative exchange extends beyond product development; CERTAIN also serves as a central authority, keeping stakeholders informed about legal, ethical, and technical matters related to AI and certification.

By maintaining open communication channels, CERTAIN ensures that its outcomes are both practical and widely adopted across the industry.

Ensuring Compliance with Ethical AI Regulations in Europe

As the EU AI Act moves toward full application, the guidelines and tools developed under CERTAIN will be crucial. The Act will impose strict requirements on AI systems, particularly those categorized as “high-risk”, such as applications in healthcare, transportation, and law enforcement.

While these regulations aim to enhance safety and accountability, they also present challenges for organizations striving to comply.

CERTAIN endeavors to ease these challenges by providing actionable solutions that align with Europe’s legal framework while promoting innovation. In doing so, the project is poised to play a pivotal role in positioning Europe as a global leader in ethical AI development.
