Category: EU AI Compliance

EU AI Act: Impact on Generative AI Development

The EU AI Act establishes a risk-based regulatory framework aimed at ensuring the safety and accountability of AI systems, defining requirements for transparency, data governance, and oversight for generative AI developers and other stakeholders in the AI supply chain.

AI Compliance Essentials for CTOs Under the EU Act

The EU AI Act is now in effect, introducing several obligations for AI providers, particularly those offering general-purpose AI (GPAI). To ensure compliance, companies should maintain a detailed inventory of their AI models, datasets, prompts, and decision logs, while adhering to technical documentation and copyright transparency requirements.
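The inventory obligation described above can be sketched as a simple record structure. This is a minimal illustration only; the field names are assumptions for the example and are not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssetRecord:
    """One entry in a company-wide AI inventory (illustrative fields)."""
    name: str              # model, dataset, prompt, or log identifier
    asset_type: str        # "model" | "dataset" | "prompt" | "decision_log"
    provider: str          # internal team or external vendor
    gpai: bool = False     # True if the asset is a general-purpose AI model
    docs_url: str = ""     # link to the technical documentation
    added: date = field(default_factory=date.today)

# Registering a hypothetical third-party GPAI model in the inventory
inventory = [
    AIAssetRecord(name="example-gpai-model", asset_type="model",
                  provider="ExampleVendor", gpai=True),
]
print(len(inventory))  # prints 1
```

Keeping each model, dataset, and prompt as one structured record like this makes it straightforward to answer documentation and transparency requests later.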

Atlassian’s Commitment to Responsible AI: Progress and Insights

In September 2024, Atlassian joined over 100 industry leaders in Brussels to pledge commitment to responsible AI practices as part of the EU AI Pact. A year later, the company released a report detailing its progress and key actions on its five pledges, emphasizing accountability and the importance of AI governance.

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy. Effective AI governance not only ensures accountability and adherence to ethical principles but also accelerates innovation and productivity within organizations.

AI Regulation and Its Impact on Arbitration Enforcement in the EU

The EU AI Act establishes stringent regulations for artificial intelligence, categorizing AI tools used in legal decision-making, including arbitration, as high risk. Non-compliance with these regulations may result in courts refusing to recognize or enforce arbitral awards based on procedural or public policy grounds.

Implementing AI Governance for EU Compliance

AI governance involves establishing rules and policies to ensure responsible AI use and mitigate the risks associated with AI deployment. The EU AI Act categorizes AI systems by risk level and outlines the compliance measures required of companies using high-risk AI applications.

Classifying Your AI System Under the EU AI Act Made Easy

The EU AI Act categorizes AI systems into four risk levels: Unacceptable, High-risk, Limited, and Minimal. Genbounty offers a free Risk Classification Wizard to help teams quickly determine their system’s category and understand the corresponding obligations for compliance.
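The four-tier classification above can be sketched as a small lookup table. The tier names follow the article; the one-line obligation summaries are simplified assumptions for illustration, not legal text.

```python
# Illustrative mapping of the Act's four risk tiers to headline obligations.
RISK_TIERS = {
    "unacceptable": "prohibited practice; may not be placed on the EU market",
    "high": "conformity assessment, risk management, logging, human oversight",
    "limited": "transparency duties, e.g. disclosing that users interact with AI",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations(tier: str) -> str:
    """Look up the headline obligations for a risk tier (case-insensitive)."""
    return RISK_TIERS[tier.strip().lower()]

print(obligations("High"))
```

A classification wizard like the one described essentially walks users through questions whose answers resolve to one of these four keys.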

EU AI Act vs. US AI Action Plan: A Risk Perspective

Dr. Cari Miller discusses the differences between the EU AI Act and the US AI Action Plan, highlighting that the EU framework is much more risk-aware and imposes binding obligations on high-risk AI systems. She emphasizes the importance of balancing innovation with regulation, particularly in AI procurement, to ensure responsible practices in the rapidly evolving technology landscape.

Critical Evaluations of AI Compliance Under the EU Act

The EU’s Artificial Intelligence Act introduces new obligations for organizations regarding general-purpose AI models, which took effect in August 2025. Dealmakers must enhance their due diligence processes to ensure compliance and understand the risks associated with AI systems in the context of mergers and acquisitions.

Exploring Trustworthiness in Large Language Models Under the EU AI Act

This systematic mapping study evaluates the trustworthiness of large language models (LLMs) in the context of the EU AI Act, highlighting their capabilities and the challenges they face. The research identifies significant gaps in the application of trustworthiness principles across various high-stakes domains, emphasizing the need for further exploration and development.
