The Impact of the EU AI Act on AI Reseller Deals
The European Union’s Artificial Intelligence Act (the AI Act) represents a significant milestone as the world’s first comprehensive regulation governing artificial intelligence. This act establishes obligations for companies operating within and outside the EU, particularly affecting those engaged in AI reseller deals.
Understanding the Obligations Under the AI Act
Companies’ responsibilities under the AI Act vary based on their involvement with AI tools and the associated risk levels. The key roles defined in the act include:
- Providers – Those who develop AI tools, typically facing stricter obligations.
- Deployers – Users of AI tools, usually subject to less rigorous requirements.
For example, a company that modifies a low-risk AI tool to create a high-risk AI system must adhere to the more stringent obligations of a provider, despite its initial role as a deployer.
Extraterritorial Reach of the AI Act
Although it is EU legislation, the AI Act extends its reach beyond EU borders. Non-EU companies using AI systems that affect individuals or operations within the EU are also bound by its rules. For instance, if a U.S. company employs an AI tool to recruit employees for positions based in the EU, the act's stipulations would apply because the tool's output is used within the EU.
Risk Levels and Compliance Requirements
The AI Act classifies AI systems into different risk categories:
- Unacceptable risk – Prohibited under the AI Act.
- High-risk – Subject to stringent compliance requirements.
- Limited risk – Subject to transparency obligations.
- Minimal risk – Not subject to additional requirements.
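The tiered structure above can be thought of as a simple mapping from risk category to headline obligation. As a minimal illustrative sketch only (the enum and obligation strings are hypothetical labels, not terms defined in the act):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical summary of each tier's headline obligation, per the list above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "conformity assessment and full compliance program",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.MINIMAL: "no additional requirements",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the summary obligation for a given risk tier."""
    return OBLIGATIONS[tier]
```

In practice, of course, classifying a system into a tier is the hard legal question; this sketch only captures what follows once that classification is made.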
For instance, providers of high-risk AI systems, such as those used in medical devices, must conduct conformity assessments and follow extensive compliance protocols regarding cybersecurity, privacy, and data governance.
Navigating Compliance in AI Reseller Deals
In reseller arrangements, a deployer can become a new provider if it modifies the AI system. Companies must therefore assess their roles and potential future uses at the contract negotiation stage. Situations that trigger a new provider designation include:
- Branding a high-risk AI system under a new name or trademark.
- Making substantial modifications while retaining the high-risk classification.
- Changing the intended purpose of a non-high-risk AI system to a high-risk one.
In these situations, the original provider remains liable for the compliance of the unmodified AI tool, while the new provider assumes responsibility for the modified system.
Cooperation Obligations and Risk Management
The AI Act imposes cooperation obligations on the initial provider to assist the new provider in meeting compliance standards. This includes:
- Providing necessary documentation and logs as required by the AI Act.
- Granting technical access and assistance to fulfill compliance obligations.
To mitigate risks, initial providers may seek indemnities from new providers for any liabilities arising from changes made to the AI tool. Contractual clarity surrounding cooperation obligations is vital to ensure all parties understand their responsibilities.
Key Takeaways
To navigate the complexities introduced by the AI Act, companies involved in AI reseller deals should:
- Evaluate their specific roles and obligations under the act.
- Contractually define cooperation terms between providers and deployers.
- Implement measures to prevent unauthorized modifications that could elevate risk classifications.
With the AI Act now in force, understanding and adhering to its requirements is crucial for all stakeholders within the AI ecosystem.