AI Governance: Transparency, Ethics, and Risk Management in the Age of AI

Artificial intelligence is rapidly transforming our world, presenting unprecedented opportunities alongside complex challenges. As AI models become increasingly powerful and pervasive, questions surrounding their responsible development and deployment are paramount. This analysis delves into crucial aspects of AI governance, exploring specific commitments to transparency, ethical development, and robust risk management as outlined in a proposed framework. It examines the principles shaping this framework, the safeguards necessary for systemically significant models, and the essential steps for ensuring accountability and safety throughout the AI lifecycle.

What are the central commitments of the framework concerning transparency, model documentation, and copyright for general-purpose AI models?

This section of the General-Purpose AI Code of Practice addresses transparency, model documentation, and copyright compliance for general-purpose AI models (GPAI). It outlines specific commitments and measures that aim to align with Chapter V of the AI Act.

Transparency and Documentation

Commitment I.1: Documentation. Providers commit to maintaining up-to-date model documentation, as stipulated in Article 53(1)(a) and (b) of the AI Act. This includes providing relevant information to downstream providers who integrate the GPAI model into their AI systems and to the AI Office upon request.

Key aspects of this commitment:

  • A user-friendly Model Documentation Form simplifies compliance and documentation.
  • The Form clearly specifies whether each item listed is intended for downstream providers, the AI Office, or national competent authorities.
  • Information intended for the AI Office or national competent authorities is only provided upon request, stating the legal basis and purpose.
  • Information for downstream providers should be made available to them proactively.
  • Providers are required to ensure the quality, security, and integrity of the documented information.

Exemption: These measures do not apply to providers of open-source AI models that meet the conditions specified in Article 53(2) of the AI Act, unless these models are classified as GPAI models with systemic risk.

Copyright Compliance

Commitment I.2: Copyright Policy. To comply with Union law on copyright and related rights under Article 53(1)(c) of the AI Act, Signatories commit to drawing up, keeping up-to-date, and implementing a copyright policy.

The elements of this commitment are:

  • Developing a policy to comply with Union law on copyright and related rights.
  • Identifying and complying with reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.
  • Adopting measures for GPAI models placed on the EU market, including:
    • Reproducing and extracting only lawfully accessible copyright-protected content when crawling the web.
    • Identifying and complying with rights reservations (illustrated in the sketch after this list).
    • Obtaining adequate information about protected content that is not web-crawled by the Signatory.
    • Designating a point of contact and enabling the lodging of complaints.
    • Implementing measures to mitigate the risk of producing copyright-infringing output.
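For content made publicly available online, rights reservations under Article 4(3) of Directive (EU) 2019/790 are typically expressed in machine-readable form, and robots.txt is one common (though not the only) vehicle for them. Purely as an illustration of the crawling-related measures above, here is a minimal Python sketch, using the standard library's urllib.robotparser, of how a crawler might skip URLs it is not permitted to fetch; the user-agent string and URLs are hypothetical, and a real pipeline would also need to handle other reservation formats and verify lawful access.

```python
from urllib import robotparser
from urllib.parse import urlsplit

# Hypothetical user-agent string for the provider's training-data crawler.
CRAWLER_USER_AGENT = "ExampleGPAICrawler/1.0"

def may_crawl(url: str) -> bool:
    """Check the site's robots.txt before fetching content for training.

    robots.txt is only one machine-readable way a rightsholder may express a
    text-and-data-mining reservation; other formats need separate checks.
    """
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        # If robots.txt cannot be retrieved, err on the side of not crawling.
        return False
    return parser.can_fetch(CRAWLER_USER_AGENT, url)

if __name__ == "__main__":
    print(may_crawl("https://example.com/articles/some-page"))
```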

This Code of Practice seeks to assist AI providers in effectively complying with their obligations under the AI Act, ensuring a high level of transparency, and respecting copyright laws within the EU.

What are the fundamental principles guiding the development of the Code of Practice, and how do they influence its structure and content?

The General-Purpose AI Code of Practice aims to guide AI model development and deployment within the framework of the EU AI Act. Here’s a breakdown of the core tenets shaping its structure and content:

EU Values Alignment

The Code prioritizes adherence to core EU principles and values, ensuring alignment with the Charter of Fundamental Rights, the Treaty on European Union, and the Treaty on the Functioning of the European Union.

AI Act and International Harmonization

The Code facilitates the proper application of the AI Act, while taking into account international approaches, including standards and metrics developed by AI Safety Institutes and standard-setting organizations, per Article 56(1) of the AI Act.

Proportionality to Risks

The Code ties the stringency of commitments and measures to the level of risk, demanding more rigorous action when facing higher risk tiers or potential for severe harm. Specific strategies include:

  • Targeted Measures: Focusing on specific, actionable measures rather than broad, less-defined proxies.
  • Risk Differentiation: Tailoring risk assessment and mitigation strategies to different risk types, deployment scenarios, and distribution methods. For example, systemic risk mitigation might differentiate between intentional and unintentional risks.
  • Dynamic Updates: Referencing dynamic sources of information that providers can be expected to monitor in their risk assessment and mitigation, including incident databases, consensus standards, up-to-date risk registers, state-of-the-art risk management frameworks, and AI Office guidance.

Future-Proofing

Recognizing the rapid pace of technological advancement, the Code aims to remain relevant by:

  • Enabling Rapid Updates: Facilitating swift adaptation and updates to reflect technological and industry developments.
  • Referencing Dynamic Information: Pointing to dynamic information sources for risk assessment and mitigation, like state-of-the-art risk management frameworks.
  • Addressing Emerging Models: Considering additional measures for specific general-purpose AI models, including those used in agentic AI systems.

SME Support

The Code acknowledges the unique challenges faced by small and medium-sized enterprises (SMEs) and startups and takes their constraints into account. Its measures provide simplified means of compliance for SMEs that lack the resources of larger AI developers.

Ecosystem Support

The Code promotes cooperation and knowledge sharing among stakeholders through:

  • Sharing Resources: Enabling the sharing of AI safety infrastructure and best practices between model providers.
  • Stakeholder Engagement: Encouraging participation from civil society, academia, third parties, and government organizations.

Innovation in Governance and Risk Management

The Code encourages innovation by recognizing advancements in AI safety governance and evidence collection. Alternative approaches to AI safety that demonstrate equal or superior outcomes with less burden should be recognized and supported.

How should providers of general-purpose AI models with systemic risk define and implement a Safety and Security Framework?

For providers of general-purpose AI models with systemic risk (GPAISRs), establishing a robust Safety and Security Framework is paramount for adherence to regulations like the AI Act. This framework is not just a set of guidelines; it is a dynamic system built to evaluate, mitigate, and govern the risks associated with potentially hazardous AI models.

Core Components of the Framework

The framework should detail the systemic risk assessment, mitigation, and governance measures intended to keep systemic risks stemming from the GPAISRs within acceptable levels. The framework needs to include these components:

  • Systemic Risk Acceptance Criteria: Predefined benchmarks for determining whether systemic risks are acceptable. These criteria should:
    • Be defined for each identified systemic risk.
    • Include measurable systemic risk tiers (an illustrative sketch follows this list).
    • Specify unacceptable risk tiers, especially without mitigation.
    • Align with best practices from international bodies or AI Office guidance.
  • Systemic Risk Assessment and Mitigation Procedures: Outline how the company will systematically evaluate risks at different points along the model lifecycle, especially before deployment.
  • Forecasting: For each systemic risk tier that depends on specific model capabilities, state estimated timelines for when the Signatory reasonably foresees first developing a GPAISR with such capabilities (where no model of the Signatory already on the market possesses them), so that appropriate systemic risk mitigations can be prepared.
  • Technical Systemic Risk Mitigations: Document in the Framework the technical systemic risk mitigations, including security mitigations, intended to reduce the systemic risk associated with each systemic risk tier.
  • Governance Risk Mitigations: Detail governance structures, oversight mechanisms, and accountability frameworks for managing systemic risks.
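The Code leaves the precise schema of acceptance criteria and risk tiers to each Signatory. Purely as an illustration of how measurable tiers and an acceptance check could be represented, here is a minimal Python sketch; every class, field, and example value below is an assumption made for illustration, not terminology prescribed by the Code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskTier:
    """One measurable tier for a given systemic risk (illustrative only)."""
    name: str                            # e.g. "Tier 2: meaningful uplift"
    indicators: List[str]                # measurable indicators tracked in evaluations
    acceptable_without_mitigation: bool
    required_mitigations: List[str] = field(default_factory=list)

@dataclass
class AcceptanceCriteria:
    """Predefined acceptance criteria for one identified systemic risk."""
    systemic_risk: str                   # e.g. "cyber offence"
    tiers: List[RiskTier]

    def is_acceptable(self, tier_name: str, mitigations_in_place: List[str]) -> bool:
        """Return True if the assessed tier is acceptable given the applied mitigations."""
        tier = next(t for t in self.tiers if t.name == tier_name)
        if tier.acceptable_without_mitigation:
            return True
        return all(m in mitigations_in_place for m in tier.required_mitigations)

# Example: a hypothetical cyber-offence risk with two tiers.
criteria = AcceptanceCriteria(
    systemic_risk="cyber offence",
    tiers=[
        RiskTier("Tier 1: no meaningful uplift", ["evaluation score below threshold"], True),
        RiskTier("Tier 2: meaningful uplift", ["evaluation score above threshold"], False,
                 ["output filtering", "enhanced security mitigations"]),
    ],
)
print(criteria.is_acceptable("Tier 2: meaningful uplift",
                             ["output filtering", "enhanced security mitigations"]))  # True
```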

Practical Implementation & Reporting

Implementing the framework involves a continuous process of evaluation, adaptation, and reporting. Key considerations include:

  • Regular Adequacy Assessments: Determine if the framework itself is effective in assessing and mitigating systemic risks.
  • Safety and Security Model Reports: These reports should document risk assessment results, mitigation strategies, and justifications for deployment decisions, and be submitted to the AI Office.
  • Transparency and External Input: The framework should consider input from external actors in decision-making concerning systemic risks.
  • Serious Incident Reporting: Implement processes for tracking, documenting, and reporting relevant information about serious incidents to the AI Office throughout the model lifecycle, together with possible corrective measures, resourcing these processes in proportion to the severity of each incident and the degree of the model’s involvement.
  • Public Transparency: Publish information relevant to the public understanding of systemic risks stemming from their GPAISRs, where necessary to effectively enable assessment and mitigation of systemic risks.

Challenges and Nuances

Navigating this landscape requires careful consideration of several factors:

  • Proportionality: Risk assessment and mitigation should be proportionate to the specific risks presented by the model.
  • Keeping Up with the State of the Art: Implement state-of-the-art technical safety mitigations against unacceptable systemic risks, including general cybersecurity best practices, so as to meet at least the RAND SL3 security goal.
  • Collaboration: Sharing tools, practices, and evaluations with other organizations can improve overall safety and reduce duplication of efforts.
  • Multidisciplinary Model Evaluation Teams: Ensure that model evaluation teams collectively have the multidisciplinary expertise needed for systemic risk assessment.

Ethical Considerations

Finally, providers must not retaliate against any worker who provides information about systemic risks stemming from their GPAISRs to the AI Office or, as appropriate, to national competent authorities, and must inform workers at least annually of an AI Office mailbox designated for receiving such information, if such a mailbox exists.

What are the crucial steps for identifying, analyzing, and mitigating systemic risks throughout the lifecycle of general-purpose AI models?

The EU’s proposed AI Code of Practice, designed to guide compliance with the AI Act, emphasizes a systematic approach to managing the risks associated with general-purpose AI models with systemic risk (GPAISRs). Here’s a breakdown of the critical steps, tailored for AI governance professionals:

1. Establishing a Safety and Security Framework

Providers of GPAISRs must adopt and implement a comprehensive Safety and Security Framework. This framework should detail systemic risk assessment, mitigation strategies, and governance measures designed to keep risks within acceptable levels. Key components of the framework include:

  • Systemic Risk Acceptance Criteria: Clearly defined and justified criteria for determining the acceptability of systemic risks, including measurable risk tiers.
  • Risk Mitigation Plans: Detailed descriptions of technical mitigations, their limitations, and contingency plans for scenarios where mitigations fail.

2. Systemic Risk Assessment and Mitigation (Lifecycle-Wide)

Conduct systemic risk assessments at appropriate points throughout the entire model lifecycle, starting during development. This process involves several key activities:

  • Planning Development: Implement a framework and begin assessing/mitigating risks when planning a GPAISR, or at the latest 4 weeks after notifying the AI Office.
  • Milestone Reviews: Assess and mitigate risks at documented milestones during development, such as after fine-tuning, expanding access, or granting new affordances. Implement procedures to quickly identify substantial risk changes.

3. Systemic Risk Identification

Select and further characterize systemic risks stemming from GPAISRs that are significant enough to warrant further assessment and mitigation. Crucial considerations include:

  • Taxonomy Adherence: Selecting risks from a defined taxonomy of systemic risks (e.g., cyber offense, CBRN risks, harmful manipulation).
  • Scenario Planning: Develop systemic risk scenarios that characterize each risk’s nature and sources, including potential pathways to harm and reasonably foreseeable misuses.

4. Systemic Risk Analysis

Conduct a rigorous analysis of identified systemic risks, estimating their severity and probability. The analysis should leverage multiple sources and methods:

  • Quantitative and Qualitative Estimates: Use quantitative and qualitative risk estimates as appropriate, along with systemic risk indicators, to track progress towards risk tiers.
  • State-of-the-Art Evaluations: Run evaluations to adequately assess the capabilities, propensities and effects of GPAISRs, using a wide range of methodologies (e.g., red teaming, benchmarks).
  • Model-Independent Information: Gather insights from literature reviews, historical incident data, and expert consultations.

5. Risk Acceptance Determination

Compare the results of systemic risk analysis to pre-defined risk acceptance criteria to ensure proportionality. Use these comparisons to inform decisions about development, market release, and usage. If risks are deemed unacceptable:

  • Implement additional mitigations, or refrain from making the model available on the market, as applicable.
  • Restrict the model’s availability, or withdraw or recall it from the market, as applicable.

6. Safety and Security Mitigations (Technical)

Implement state-of-the-art technical safety mitigations that are proportionate to systemic risks, such as: filtering training data, monitoring inputs/outputs, fine-tuning to refuse certain requests, and implementing safeguards/security tools.
Specifically:

  • Implement general cybersecurity best practices.
  • Implement procedures to assess and test their security readiness against potential and actual adversaries. This includes tools like regular security reviews and bug bounty programs.

7. Governance and Documentation

Several governance measures are crucial to effectively manage and oversee the process:

  • Clear Responsibility Allocation: Define and allocate responsibility for managing systemic risk across all organizational levels.
  • Independent External Assessments: Obtain independent, external assessments of GPAISRs before placing them on the market.
  • Serious Incident Reporting: Set up processes to track, document, and report serious incidents to the AI Office without undue delay.
  • Model Reports: Create detailed Safety and Security Model Reports documenting risk assessments, mitigations, and justifications for market release.
  • Public Transparency: Publish information relevant to public understanding of systemic risks.

By diligently following these steps, organizations can better navigate the complex landscape of AI governance and foster a more responsible and trustworthy AI ecosystem.

What core principles should guide the implementation of tools and best practices for state-of-the-art model evaluation and system risk assessment for all models?

The European Union’s draft AI Code of Practice, aimed at providing a blueprint for compliance with the comprehensive AI Act, emphasizes several core principles for implementing state-of-the-art model evaluation and risk assessment. These apply specifically to General Purpose AI models with Systemic Risk (GPAISR), but provide valuable insights for all AI development. Here’s a breakdown for legal-tech professionals:

EU Principles and Values

All tools and practices must demonstrably align with the fundamental rights and values enshrined in EU law, including the Charter of Fundamental Rights.

Alignment with the AI Act and International Approaches

Model evaluation and risk assessment must directly contribute to the proper application of the AI Act. This means:

  • Referencing international standards and metrics, like those developed by AI Safety Institutes, in accordance with Article 56(1) of the AI Act.

Proportionality to Risks

The stringency of evaluation and mitigation measures must be directly proportional to the identified risks. This principle drives multiple key actions:

  • More stringent measures for higher-risk tiers or uncertain risks of severe harm.
  • Specific measures that clearly define how providers should meet obligations.
  • Differentiation of measures based on risk types, distribution strategies, deployment contexts, and other factors that influence risk tiers.

The AI Office will proactively monitor measures susceptible to circumvention or misspecification.

Future-Proofing

Given the rapid evolution of AI technology, tools and practices must facilitate rapid updates in light of technological advancements. This involves:

  • Referencing dynamic information sources, such as incident databases, consensus standards, risk registers, risk management frameworks, and AI Office guidance, that providers are expected to monitor.
  • Articulating additional measures for specific GPAI models (e.g., those used in agentic AI systems) as technology necessitates.

Proportionality to the Size of the Provider

Measures should account for the size and resources of the AI model provider. The AI Act acknowledges the value and necessity of simplified paths to compliance for small and medium-sized enterprises (SMEs) and startups.

Support and Growth of Safe, Human-Centric AI

The Code is designed to foster cooperation among stakeholders through shared safety infrastructure and best practices. Actions include:

  • Sharing safety infrastructure and best practices
  • Encouraging participation from civil society, academia, third parties, and government organizations.
  • Promoting transparency between stakeholders and increased knowledge-sharing efforts.

Innovation in AI Governance and Risk Management

The Code encourages providers to innovate and advance the state-of-the-art in AI safety governance. Alternative approaches that demonstrate equal or superior safety outcomes should be recognized and supported.

Commitment to Documentation and Transparency

Signatories of the Code commit to drawing up and keeping up-to-date model documentation, including publicly available information regarding the training process and the data used.

What are the governance and reporting requirements that providers of GPAISRs must follow to ensure accountability and transparency?

The AI Act’s Code of Practice imposes significant governance and reporting obligations on providers of general-purpose AI models with systemic risk (GPAISRs) to foster accountability and transparency. These requirements are designed to ensure that these models, given their high-impact capabilities, are developed and deployed responsibly.

Safety and Security Model Reports

A core requirement is the creation of a Safety and Security Model Report for each GPAISR before it’s made available on the market. This report must document:

  • Systemic risk assessment and mitigation results.
  • Justifications for decisions to release the model.

The level of detail required in the Model Report should be proportionate to the level of systemic risk the model poses. This allows the AI Office to understand how the provider is implementing its systemic risk assessment and mitigation measures. The report should define conditions under which the justifications for having deemed systemic risk to be acceptable would no longer hold true.

Documentation of Compliance and Risk Management

Beyond the Model Report, providers of GPAISRs must meticulously document their compliance with the AI Act and the Code of Practice. That documentation includes:

  • An assessment of whether their AI model meets the conditions for classification as a GPAISR.
  • The methodologies for identifying and addressing systemic risks, especially with regard to the sources of such risks.
  • The limitations and imprecision involved in testing and validating systemic risks.
  • The qualifications and level of access of both internal and external model review teams.
  • The rationale for deeming the level of systemic risk acceptable.
  • How security and safety constraints are met, managed, and followed, as well as the steps taken to develop the procedures in place to monitor them.

It is critical to retain such documentation for a period of at least twelve months and beyond the retirement of the AI model.

Transparency About Intended Model Behavior

Model Reports must also specify the model’s intended behavior, for example:

  • The principles the model is designed to follow.
  • How the model prioritizes different kinds of instructions.
  • Topics on which the model is intended to refuse instructions.

Frameworks for Safety and Security

Signatories must prepare and maintain a Safety and Security Framework that details the systemic risk assessment, mitigation, and governance procedures. This framework must include systemic risk acceptance criteria that:

  • Are measurable.
  • Define systemic risk tiers linked to model capabilities, harmful outcomes, and quantitative risk estimates.
  • Identify triggers and conditions that call for mitigations of specific systemic risks.

Frameworks must be continuously improved, rapidly updated, and kept in line with the current state of the art in AI.

Notifications to the AI Office

GPAISRs are required to notify the AI Office of several key events:

  • When their general-purpose AI model meets the criteria for classification as a GPAISR.
  • Updates to their Safety and Security Framework.
  • The outcomes of adequacy assessments.
  • The release of a Safety and Security Model Report.

Such notifications allow the AI Office to assess whether the Code is being followed appropriately and to ensure timely compliance.

Post-Market Monitoring and Adaptation

Governance doesn’t end with pre-release reports; providers of GPAISRs must conduct post-market monitoring to gather real-world data on their models’ capabilities and effects. If there are material changes to the system or to the systemic risk landscape, providers must update their Model Reports and, when appropriate, reassess the situation so that the model remains in compliance with the regulations.

External and Internal Assessment

In addition to internal monitoring, systemic risk assessment processes must include input from external actors, including government bodies.

  • Before a GPAISR is placed on the market, it must undergo an independent external assessment covering the systemic risks identified.
  • After release, providers must run a research program granting API access to the model; access should be given to academics and external teams studying systemic risks for non-commercial purposes.
  • Work and feedback from these academics and teams should then inform updates to the Code and to the documentation of current GPAISRs.

Independent Assessment

External assessors help ensure that bias is accounted for in the assessment process. The assessors must:

  • Possess the relevant domain expertise to assess and validate systemic risk.
  • Be technically versed and competent in conducting model evaluations.
  • Have implemented internal and external information security measures that are actively tested, with current reports validating their integrity.

Non-retaliation and Risk Governance

Signatories must not retaliate in any form against workers who share information or express concerns. They also need practical, safe channels that allow concerns to be raised freely, particularly to the AI Office as a contact point.

What are the essential elements for a functional, independent assessment process of the AI model?

As the AI Act implementation date looms, legal-tech professionals and compliance officers are zeroing in on independent model assessments. What should providers of general-purpose AI models with systemic risk (GPAISRs) internalize to ensure a robust assessment process?

Independent External Assessments

Before placing a GPAISR on the market, providers must secure independent external systemic risk assessments, which include model evaluations, unless the model can be demonstrated as sufficiently safe. Post-market release, facilitating exploratory independent external assessments, including model evaluations, is crucial. This highlights the need for collaboration and transparency.

Selecting Independent Assessors

GPAISR providers should look for assessors who:

  • Have significant domain expertise, aligning with the risk domain being evaluated.
  • Possess the technical skills and experience to perform rigorous model evaluations.
  • Maintain robust internal and external information security protocols, suitable for the access level granted.

Providing Access and Resources

Providers must furnish independent external assessors with the access, information, time, and resources required to carry out effective systemic risk assessments. This can mean access to fine-tuning capabilities, safe inference tools, and complete model documentation.

Maintaining Integrity

To ensure the validity of independent external assessments, those assessments must be performed without the improper influence of the provider. For instance, providers must avoid storing and analyzing model inputs and outputs from test runs without explicit permission.

Facilitating Post-Market Assessment

Providers must facilitate exploratory external research after GPAISR models are released by implementing a research program that provides API access to models with and without mitigations, allocating free research API credits for legitimate research, and contributing to a legal and technical safe harbor regime to protect assessors testing the model.

Important Considerations for SMEs

Small and medium-sized enterprises (SMEs) facing challenges in adhering to quality standards or cooperating with relevant stakeholders should notify the AI Office and seek assistance in finding suitable alternative means of fulfilling requirements.

Transparency and Disclosure

It’s important to strike a balance between public transparency and security by disclosing security mitigations and model evaluations in as much detail as possible, while applying redactions that prevent increased systemic risk or the disclosure of sensitive economic information.

How can a healthy risk culture be fostered within organizations involved in developing and deploying GPAISRs?

Fostering a healthy risk culture is key for organizations developing and deploying General-Purpose AI models with Systemic Risk (GPAISRs). According to the draft Code of Practice, this involves several interconnected steps:

Defining and Allocating Responsibilities

For activities concerning systemic risk assessment and mitigation for their GPAISRs, Signatories commit to: (1) clearly defining and allocating responsibilities for managing systemic risk from their GPAISRs across all levels of the organisation; (2) allocating appropriate resources to actors who have been assigned responsibilities for managing systemic risk; and (3) promoting a healthy risk culture.

Specifically, the Code emphasizes clear definitions of responsibilities, as well as the allocation of resources, across different levels within the organization:

  • Risk oversight: Overseeing the organization’s risk assessment and mitigation activities.
  • Risk ownership: Managing systemic risks stemming from GPAISRs.
  • Support and monitoring: Supporting and monitoring risk assessment and mitigation.
  • Assurance: Providing internal (and external, when necessary) assurance regarding the adequacy of activities related to risk assessment and mitigation.

The responsibilities are allocated across:

  • Supervisory management bodies
  • Management teams
  • Operational teams
  • Assurance providers, be they internal or external

Resource Allocation

In addition, the organization must allocate resources to those with management responsibilities, including:

  • Human Resources
  • Financial Resources
  • Access to information and knowledge
  • Computational resources

Promoting a Measured and Balanced Approach

It is also crucial how the leadership conducts itself. Signatories shall promote a healthy risk culture and take measures to ensure that actors who have been assigned responsibilities for managing systemic risk stemming from GPAISRs (pursuant to Measure II.10.1) take a measured and balanced approach to systemic risk, neither being inappropriately risk-seeking, nor risk-ignorant, nor risk-averse, as relevant to the level of systemic risk stemming from the Signatories’ GPAISRs.

The ultimate goals to strive for include a work environment with open communication and sensible incentives:

  • Setting the tone with regards to a healthy systemic risk culture from the top;
  • Allowing effective communication and challenge to decisions concerning systemic risk;
  • Appropriate incentives to discourage excessive systemic risk-taking, such as rewards for cautious behavior and internal flagging of systemic risks;

Ideally, these efforts should lead to staff that feel comfortable communicating potential problems related to their work:

  • Anonymous surveys find that staff are aware of reporting channels, are comfortable raising concerns about systemic risks, understand the Signatory’s framework, and feel comfortable speaking up; or
  • Internal reporting channels are actively used and reports are acted upon appropriately.

What are the critical requirements for reporting and addressing serious incidents involving GPAISRs?

As the EU’s AI Act nears enforcement, the spotlight is turning to incident reporting for General-Purpose AI models with Systemic Risk (GPAISRs). Here’s a rundown of key requirements, pulled directly from the latest draft of the AI Code of Practice:

Comprehensive Incident Tracking

GPAISR providers must establish robust processes for tracking, documenting, and reporting serious incidents to the AI Office (and potentially national authorities) without undue delay. These processes need sufficient resourcing, relative to the gravity of the incident and their model’s involvement. Methods for identifying serious incidents should align with their business models.

Essential Data to Report

Documentation must encompass relevant details, including:

  • Start and end dates (or best approximations)
  • Resulting harm and affected parties
  • The chain of events leading to the incident
  • The specific model version involved
  • Description of the GPAISR’s involvement
  • Intended or enacted responses
  • Recommendations for the AI Office and national authorities
  • A root cause analysis, detailing outputs and contributing factors
  • Any known near-misses

Escalation and Notification Timeframes

The Code specifies strict deadlines for reporting incidents depending on the severity:

  • Critical Infrastructure Disruption: Immediate notification, no later than 2 days
  • Grave Physical Harm: Immediate notification, no later than 10 days
  • Fundamental Rights Infringements, Property/Environmental Damage: Immediate notification, no later than 15 days
  • Cybersecurity Incidents, Model Exfiltration: Immediate notification, no later than 5 days

Initial reports must cover core information. Intermediate reports, detailing progress every 4 weeks until resolution, are required. A final, comprehensive report is due no later than 60 days after the incident’s resolution. Companies must also decide whether to submit individual reports or consolidated reports when multiple incidents occur.
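Purely to make these timelines concrete, the following Python sketch shows how a provider’s incident-tracking tooling might derive the notification dates above; the type and function names are hypothetical, and the deadlines encoded are simply those listed in this section.

```python
from datetime import date, timedelta
from enum import Enum
from typing import Dict, Optional

class IncidentCategory(Enum):
    """Maximum days to the initial notification, per the deadlines above."""
    CRITICAL_INFRASTRUCTURE_DISRUPTION = 2
    CYBERSECURITY_OR_MODEL_EXFILTRATION = 5
    GRAVE_PHYSICAL_HARM = 10
    FUNDAMENTAL_RIGHTS_PROPERTY_OR_ENVIRONMENT = 15

def reporting_schedule(category: IncidentCategory,
                       identified_on: date,
                       resolved_on: Optional[date] = None) -> Dict[str, date]:
    """Derive key reporting dates for a serious incident (illustrative only)."""
    schedule = {
        # Immediate notification, at the latest within the category's day limit.
        "initial_report_due": identified_on + timedelta(days=category.value),
        # Intermediate progress reports every 4 weeks until resolution.
        "first_intermediate_report_due": identified_on + timedelta(weeks=4),
    }
    if resolved_on is not None:
        # Final, comprehensive report no later than 60 days after resolution.
        schedule["final_report_due"] = resolved_on + timedelta(days=60)
    return schedule

# Example: a cybersecurity incident identified on 1 March 2025 and resolved on 20 March 2025.
print(reporting_schedule(IncidentCategory.CYBERSECURITY_OR_MODEL_EXFILTRATION,
                         date(2025, 3, 1), date(2025, 3, 20)))
```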

Proactive Documentation and Retention

Maintain meticulous documentation of all relevant data for at least 36 months from either the date of documentation or the date of the reported serious incident involving the general-purpose AI model, whichever is later.

Corrective Measures

Signatories are expected to have clearly defined, scalable resolution and communication processes. These should be able to apply the necessary technical risk mitigations when GPAISR incidents occur or are foreseen.

In short, transparency and documentation are key. These measures aim to create accountability around systemic risks, while promoting cross-stakeholder collaborations for GPAISR governance.

What are the obligations regarding non-retaliation protections for workers and how to inform them?

The AI Act emphasizes non-retaliation protections for workers who report potential systemic risks associated with general-purpose AI models that may be classified as having systemic risk (GPAISRs).

Core Obligations

Signatories of the General-Purpose AI Code of Practice commit to the following:

  • Non-Retaliation: Refrain from any retaliatory actions against workers who provide information regarding systemic risks stemming from the company’s GPAISRs. This applies if the information is shared with the AI Office or national competent authorities.
  • Annual Notification: Inform workers at least annually about the existence of an AI Office mailbox (if one exists) designated for receiving information related to systemic risks.

Important Considerations

These non-retaliation commitments do not supersede other obligations under Union law, including copyright and related rights. For general-purpose AI models with systemic risk (GPAISRs), it is particularly important to pursue further analysis with the AI Office.

This commitment aims to foster transparency and accountability by ensuring that individuals within organizations can raise concerns about AI safety without fear of reprisal.

What are the crucial aspects that providers must detail to the AI Office for a model to meet the requirements of the Code?

For AI models to meet the Code’s requirements, providers must furnish the AI Office with comprehensive details. These encompass:

Transparency and Documentation

Signatories need to provide user-friendly model documentation, potentially using a Model Documentation Form. This includes:

  • General model information (e.g., model name, version, release date)
  • Details on model properties (architecture, input/output modalities, size)
  • Information on distribution channels and licensing
  • Acceptable use policies and intended uses
  • Specifications for training processes and data used (including measures to detect harmful content and biases).
  • Computational resources utilized during training.
  • Additional information for general-purpose AI models with systemic risk, such as evaluation strategies, adversarial testing results, and system architecture details.

Information for downstream AI providers should be submitted proactively, while information for the AI Office and national competent authorities is provided only upon request, with the legal basis and necessity stated.

All shared information requires rigorous adherence to confidentiality obligations and trade secret protections, as underlined in Article 78 of the AI Act.
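As a rough illustration of how the documentation items listed above might be organised internally before being transferred onto the official Model Documentation Form, a provider could keep a structured record along the following lines; every field name here is an assumption made for illustration, not the Form’s actual layout.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelDocumentationRecord:
    """Illustrative internal record mirroring the documentation items listed above."""
    # General model information
    model_name: str
    version: str
    release_date: str
    # Model properties
    architecture: str
    input_modalities: List[str]
    output_modalities: List[str]
    parameter_count: Optional[int]
    # Distribution and licensing
    distribution_channels: List[str]
    licence: str
    # Use policies
    acceptable_use_policy: str
    intended_uses: List[str]
    # Training process, data, and compute
    training_process_summary: str
    training_data_summary: str        # incl. measures to detect harmful content and biases
    training_compute: Optional[str]   # e.g. total compute used during training
    # Additional items for general-purpose AI models with systemic risk
    evaluation_strategies: List[str] = field(default_factory=list)
    adversarial_testing_results: List[str] = field(default_factory=list)
    system_architecture_details: Optional[str] = None
```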

Systemic Risk Management (Applicable to GPAISRs)

For models deemed to have systemic risk, a comprehensive Safety and Security Framework must be presented, detailing:

  • Systemic risk acceptance criteria, including risk tiers defined by measurable characteristics (model capabilities, propensities, harmful outcomes).
  • Detailed systemic risk assessments throughout the model lifecycle.
  • Technical and governance risk mitigation measures.
  • Periodic adequacy assessments of the Framework to assess effectiveness and improve over time.
  • Safety and Security Model Reports: documenting risk assessment, mitigation results, and decision-making. The Model Report details should justify decisions to release the model.

Serious Incident Reporting

Establish comprehensive processes for:

  • Tracking and documenting relevant information about serious incidents, covering aspects from incident start/end dates to root cause analysis and corrective measures.
  • Reporting to the EU AI Office without undue delay, with timelines sensitive to incident severity.

Transparency of Processes

The provider needs to supply:

  • A description of decision-making processes (internal or external).
  • Qualifications, levels of access, and resources for internal and external model evaluation teams.
  • Collaboration with other stakeholders in the AI value chain.
  • Adequate non-retaliation protections for workers providing information to the AI Office.

Further Compliance Aspects

Additionally, several key notification requirements must be followed to ensure that the AI Office has adequate insight into the models developed by the firm:

  • Proactive notification about qualifying models, even if destined for internal use.
  • Timely updates on framework changes and independent assessments.
  • Transparency regarding the sharing of safety tools and best practices with the broader AI community.

Providers must allocate appropriate resources to managing systemic risk. This includes ensuring a healthy risk culture with clear responsibilities across the organization.

These obligations are supplemented by the measures set out in the Transparency, Copyright, and Safety and Security chapters that accompany the Code.

What are the defined processes for regular and urgent Code updates?

The proposed General-Purpose AI Code of Practice recognizes the need for agility in AI governance. As AI technology continues to evolve, the document outlines mechanisms for regular review, adaptation, and even emergency updates to the Code. This ensures that its provisions remain proportionate to the assessed risks and technologically relevant.

Regular Review and Adaptation

The Code proposes a regular review cycle, occurring every two years. This thorough process, encouraged by the AI Office, allows for a comprehensive overhaul of the Code, keeping it aligned with current AI best practices, international approaches, and developing industry standards.

After each review, the AI Office will confirm the adequacy of the Code for Signatories.

Ongoing Implementation Support

Acknowledging the importance of continuous clarification, the document allows space for ongoing implementation support via guidance from the AI Office. As stated in the Preamble of the Code, this guidance ensures consistency between existing protocols, real-world practices, and the provisions of Article 96 of the AI Act.

Emergency Updates

Significantly, the documentation alludes to mechanisms for emergency Code updates. Triggered by “an imminent threat of large-scale irreversible harm,” these updates would be issued swiftly to mitigate negative effects.

In addition to the steps and requirements outlined above, it is recommended that:

  • Emergency updates be subject to review by the AI Office to confirm that large-scale irreversible harm has been prevented.
  • The AI Office actively invite stakeholder input on the mechanism for these updates and on suitable fora for enacting emergency updates to the Code.

Ultimately, this framework seeks to translate high-level principles into concrete actions for AI developers. By prioritizing transparency, ethical considerations, and robust safety measures, this initiative aims to foster responsible innovation in the rapidly evolving landscape of general-purpose AI. Moving forward, success hinges on vigilant monitoring, collaborative adaptation, and a commitment to safeguarding fundamental rights while harnessing the transformative potential of these powerful technologies.
