AI Governance in the Age of Regulation: Preparing for the AI Act

Artificial intelligence systems are rapidly transforming industries, promising increased efficiency and innovative solutions. However, the widespread adoption of AI also brings significant challenges, particularly concerning ethical considerations, data privacy, and potential societal impacts. New regulations are emerging to address these concerns head-on, forcing organizations to adapt and ensure their AI practices are responsible and compliant. This investigation delves into AI governance, offering crucial insights for businesses striving to harness the power of AI while mitigating its inherent risks, and examines the resulting impact on, and changes to, the audit function.

What is the core objective of the AI Act legislation?

The primary goal of the EU’s AI Act is to safeguard fundamental rights and personal data in the context of artificial intelligence. At the same time, the legislation aims to promote innovation and foster trust in AI technologies across the European Union.

Key regulatory concerns addressed by the AI Act:

  • Ethical Considerations: Ensuring AI systems are developed and used in a non-discriminatory manner, promoting equality, and encouraging cultural diversity.
  • Risk Management: Categorizing AI systems based on their level of risk, with obligations and requirements increasing accordingly. This ranges from minimal risk systems with no specific requirements to unacceptable risk systems that are prohibited.
  • Transparency: Requiring transparency in the use of AI, particularly for systems that generate synthetic content or interact with individuals.

Practical Implications for Organizations:

  • Compliance Projects: Viewing AI Act compliance as a project similar to others in terms of risk assessment, process audits, and governance evaluation.
  • AI Literacy: Ensuring a sufficient level of AI understanding among staff dealing with AI systems.
  • Inventory and Classification: Maintaining an up-to-date inventory of AI systems, classified according to the AI Act’s risk categories.
  • Role Awareness: Understanding the organization’s role in the AI value chain (e.g., provider, deployer, distributor) as requirements vary based on this role. A deployer can, through modifications of an AI system, become a provider, which triggers different requirements based on this new role.

How can organizations prepare to fulfill compliance requirements under the AI Act?

The EU’s AI Act, now in force, presents a tiered risk-based approach impacting organizations deploying AI systems within the European market. Businesses need to proactively prepare for a phased implementation, adjusting strategies based on their specific role in the AI value chain and the risk level associated with their AI systems.

Key Steps for Preparation

Organizations can approach AI Act compliance as a standard compliance project, focusing on process and governance. Here’s a roadmap:

  • AI Literacy: Ensure staff interacting with AI systems possess adequate understanding.
  • AI Inventory: Compile a comprehensive list of all AI systems used within the organization and its subsidiaries.
  • Risk Classification: Categorize AI systems according to the AI Act’s risk categories, understanding that these are legal definitions.
  • Prohibited Systems: Immediately cease the use of AI systems deemed to pose an “unacceptable risk.” Remove such AI systems from the EU market.
  • Policy Implementation: Establish robust policies to properly evaluate future AI systems.
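The inventory and classification steps above lend themselves to a simple machine-checkable registry. Below is a minimal sketch in Python; the risk-category names follow the Act, but the data structure, field names, and example systems are illustrative assumptions, not a format the Act prescribes.

```python
# Minimal sketch of an AI-system inventory classified by AI Act risk
# category. Category names follow the Act; the structure, fields, and
# example systems are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited: cease use, remove from market
    HIGH = "high"                  # strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements

@dataclass
class AISystem:
    name: str
    role: str                      # organization's role, e.g. "provider", "deployer"
    category: RiskCategory

inventory = [
    AISystem("credit-scoring-model", "deployer", RiskCategory.HIGH),
    AISystem("support-chatbot", "deployer", RiskCategory.LIMITED),
    AISystem("spam-filter", "provider", RiskCategory.MINIMAL),
]

# Systems whose use must cease immediately under the Act:
prohibited = [s.name for s in inventory if s.category is RiskCategory.UNACCEPTABLE]
print(prohibited)  # []
```

A registry like this makes the "prohibited systems" check and the policy-evaluation step auditable rather than ad hoc.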

Navigating the Timelines

The AI Act implementation features staged deadlines, each introducing specific compliance obligations. Here’s a simplified breakdown for internal auditors to guide their organization’s preparation:

  • February 2, 2025: Restrictions on prohibited AI systems begin. Deployers must cease use, and providers must remove these systems from the EU market.
  • August 2, 2025: Regulations regarding General Purpose AI (GPAI) models and public governance/enforcement come into play. Providers of GPAI models with systemic risk must notify the Commission and implement compliance policies. Both providers and deployers need appropriate transparency mechanisms.
  • August 2, 2026: Most of the AI Act applies (except Article 6(1)). Providers and deployers should establish risk assessment, risk management, and accountability systems for high-risk models and put transparency policies for limited-risk AI systems in place.
  • August 2, 2027: Article 6(1) applies. GPAI measures established in 2025 extend to all systems. Providers of products with AI safety components (per Article 6(1)) must ensure compliance with the obligations for high-risk AI.

Obligations Based on Risk and Role

Compliance requirements vary based on the AI system’s risk category and the organization’s role within the AI value chain. Key roles include provider, deployer, distributor, importer, and authorized representative.

Internal auditors should evaluate compliance across the entire value chain and be particularly vigilant about changes in roles. A deployer might become a provider if they significantly modify an AI system or market it under their own trademark, thus triggering stricter compliance obligations.

Transparency and Documentation

Providers and deployers of GPAI systems must clearly mark AI-generated content (e.g., images, deepfakes, text) with machine-readable indicators. Providers must also give deployers information on the capabilities and limitations of the model and publicly share a summary of the content used for training.

Technical documentation of the model, its training and testing process, and the results of its evaluation must be drawn up and kept up to date.
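As one illustration of what a machine-readable marker could look like, the sketch below serializes a JSON provenance record for a generated asset. The Act requires machine-readable marking but does not mandate any particular format; every field name here is an assumption for illustration only.

```python
# Illustrative machine-readable marker for AI-generated content,
# emitted as a JSON record (e.g. a sidecar file alongside an image).
# This format is a hypothetical sketch, not one prescribed by the AI Act.
import json

def make_marker(content_id: str, generator: str) -> str:
    """Serialize a provenance record for one generated asset."""
    return json.dumps({
        "content_id": content_id,
        "ai_generated": True,      # the disclosure the Act calls for
        "generator": generator,    # model or system that produced the content
    })

record = json.loads(make_marker("img-0001", "example-image-model"))
print(record["ai_generated"])  # True
```

In practice, organizations are likely to adopt an established provenance standard rather than an in-house format, but the essential property is the same: the disclosure must be parseable by software, not just visible to humans.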

What are the key obligations and requirements for entities under the AI Act?

The EU’s AI Act introduces a tiered approach to regulating AI, categorizing systems based on risk levels: unacceptable, high, limited, and minimal. Obligations for organizations vary significantly based on this classification and their role in the AI value chain as providers (those who develop and place AI systems on the market), deployers (those who use the AI systems), or other roles such as importers and distributors.

Key Obligations Based on Risk

  • Unacceptable Risk AI Systems: These are banned outright. Examples include AI systems that manipulate human behavior to cause harm, or those involved in social scoring or real-time remote biometric identification in publicly accessible spaces.
  • High-Risk AI Systems: These face the most stringent requirements. This category includes AI used in critical infrastructure, education, employment, law enforcement, and essential services like insurance and credit scoring. Key obligations include:
    • Establishing and maintaining a risk management system throughout the AI system’s lifecycle.
    • Adhering to strict data governance standards, ensuring the quality and minimizing bias in training, validation, and testing datasets.
    • Developing comprehensive technical documentation before deployment.
    • Maintaining detailed records (logs) for traceability.
    • Providing deployers with transparent information to understand and appropriately use the AI’s output.
    • Implementing human oversight mechanisms.
    • Ensuring accuracy, robustness, and cybersecurity.
    • Establishing a quality management system to ensure ongoing compliance.
    • Cooperating with authorities and demonstrating compliance upon request.
    • Performing a conformity assessment and drawing up an EU declaration of conformity.
    • Affixing CE marking to demonstrate conformity and registering the system in the EU database before placing it on the market.
    • Performing a fundamental rights impact assessment.
    • Assigning trained, responsible persons with the competence and authority to exercise proper oversight.
    • Performing post-market monitoring using a documented plan.
    • Reporting serious incidents to surveillance authorities.
  • Limited Risk AI Systems: These are subject to transparency requirements. Users must be informed they are interacting with an AI system, especially for AI-generated content. This applies to AI systems generating synthetic audio, image, video or text content.
  • Minimal Risk AI Systems: No specific requirements are outlined for AI systems that pose a minimal risk.

General Purpose AI (GPAI) Models

The AI Act also addresses General Purpose AI (GPAI) models, which are trained on large datasets and can perform a wide range of tasks. Providers of GPAI must comply with transparency obligations and respect copyright laws. GPAI models are classified as posing systemic risk when they meet criteria laid out in the AI Act, including a threshold on the compute used for training.

Obligations by Role

Providers (whether within or outside the EU) bear the brunt of the compliance burden. They are responsible for ensuring that their AI systems meet all relevant requirements before being placed on the EU market. Those not established in the EU must designate an authorized representative within the EU.

Deployers must use AI systems responsibly and in accordance with the provider’s instructions. This includes assigning human oversight, ensuring staff competence, and monitoring the AI system’s operation.

Implementation Timeline

The AI Act will be rolled out in stages. The following milestones are the most important:

  • February 2, 2025: Regulations on prohibited AI systems apply.
  • August 2, 2025: Regulations that apply to GPAI models and to the public bodies that enforce the AI Act come into effect.
  • August 2, 2026: Most provisions of the AI Act apply, except Article 6(1).
  • August 2, 2027: Article 6(1) applies, governing the classification of products with AI safety components as high-risk.

Organizations also need to consider how the AI Act interacts with existing and upcoming EU legislation, such as DORA and CSRD/CSDDD, particularly looking at third-party risks, environmental impacts, and cybersecurity concerns.

How do the obligations and requirements change according to the risk category of an AI system?

Here’s a breakdown of how obligations shift depending on the risk level of an AI system under the EU AI Act:

Risk-Based Tiers

The AI Act employs a risk-based approach, meaning the regulatory burden scales with the potential harm an AI system could cause. Here’s how it works:

  • Unacceptable Risk: These AI systems are outright banned. Think AI that manipulates people to cause harm, enables discriminatory practices, or creates facial recognition databases. The full list is in Article 5.
  • High Risk: This category faces the strictest requirements. These systems have the potential to cause significant harm in areas like critical infrastructure, education, employment, law enforcement, and decisions related to insurance.

    High-risk systems need:

    • A risk management system.
    • Data governance and quality controls.
    • Technical documentation.
    • Record keeping (logs).
    • Transparency and clear information for deployers.
    • Human oversight mechanisms.
    • Accuracy, robustness, and cybersecurity standards.
    • A quality management system.
    • Keeping documentation for at least 10 years.
    • Cooperation with competent authorities.
    • An EU declaration of conformity.
    • CE marking.
    • Pre-market conformity assessment.
    • Registration in the EU database.
    • Post-market monitoring.
    • Reporting of serious incidents.
    • Fundamental rights impact assessment.
  • Limited Risk: For AI systems like chatbots, the primary focus is on transparency. Users should know they’re interacting with an AI. For content such as synthetic audio, images, video, or text, providers must also supply a machine-readable marking indicating that it was artificially generated.
  • Minimal Risk: This includes things like AI-enabled video games or spam filters. There are no specific requirements under the AI Act.

General Purpose AI (GPAI)

The AI Act also addresses GPAI models. Transparency requirements are defined for providers and deployers, and GPAI models with systemic risk face even tougher scrutiny.
A GPAI model is presumed to pose systemic risk when the cumulative amount of compute used for its training exceeds 10^25 floating-point operations (FLOPs).
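To make that threshold concrete, the sketch below estimates training compute with the common 6·N·D rule of thumb (roughly six FLOPs per parameter per training token) and compares it against 10^25. The estimation rule is a community heuristic, not part of the Act, and the model sizes used are hypothetical.

```python
# Estimate whether a GPAI model crosses the AI Act's systemic-risk
# compute threshold (10^25 FLOPs). The 6*N*D approximation is a common
# heuristic, not part of the Act; the model sizes below are hypothetical.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(7e9, 2e12))    # 8.4e22 FLOPs -> False
print(presumed_systemic_risk(1e12, 10e12))  # 6.0e25 FLOPs -> True
```

The Commission can also designate models as systemic-risk on other grounds, so a compute estimate is a screening signal, not a definitive classification.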

Key Takeaways for Compliance

For legal-tech pros advising clients, or for compliance officers within organizations:

  • Classification is Key: Understand how the AI Act classifies systems and conduct thorough risk assessments. The risk classification method is prescribed within the AI Act.
  • Documentation is Crucial: Maintain detailed records of your AI systems, their risk assessments, and the measures you’re taking to comply.
  • Transparency Builds Trust: Be upfront with users about when they’re interacting with AI.
  • Stay Updated: The AI Act is complex, and interpretations will evolve. Continuously monitor guidance from the European Commission and other regulatory bodies.

What are the distinctions between the various roles defined by the AI Act and their corresponding responsibilities?

The AI Act defines several key roles in the AI value chain, each with distinct responsibilities. Understanding these roles is crucial for compliance and risk management. These roles can also change over time, leading to new obligations for the organisation.

Key Roles and Responsibilities

  • Provider (EU): Develops AI systems or general-purpose AI models and places them on the EU market. They bear the most extensive compliance burden under the AI Act.
  • Provider (Outside EU): If located outside the EU, they can use an importer or distributor to place the AI model on the EU market.
  • Deployer: Uses the AI system, for example, by providing it to employees or making it available to customers. Their obligations are fewer but include ensuring proper use and adherence to the provider’s guidelines.
  • Authorised Representative: A person within the EU, mandated by the provider, to act on their behalf, serving as an intermediary between non-EU AI providers and European authorities/consumers.
  • Distributor: Makes an AI system available on the EU market without being its provider or importer.
  • Importer: An entity established in the EU that places on the market an AI system bearing the name or trademark of a provider established outside the EU.

Internal auditors must determine the role their company plays for each AI system and be aware that these roles can evolve. A deployer can become a provider by making significant changes to the AI system or rebranding it, triggering new compliance requirements.
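The role shift described above can be expressed as a simple decision rule. The sketch below is a deliberate simplification: whether a modification is "substantial" is a legal judgment under the Act, and the boolean flags here merely stand in for that analysis.

```python
# Simplified sketch of the deployer-to-provider role shift under the
# AI Act. The boolean inputs stand in for legal judgments the Act
# actually requires; this is illustrative, not legal advice.
def effective_role(initial_role: str,
                   substantially_modified: bool,
                   marketed_under_own_name: bool) -> str:
    """Return the role whose obligations apply after the given actions."""
    if initial_role == "deployer" and (substantially_modified or marketed_under_own_name):
        return "provider"  # stricter provider obligations now attach
    return initial_role

print(effective_role("deployer", substantially_modified=True,
                     marketed_under_own_name=False))  # provider
print(effective_role("deployer", False, False))       # deployer
```

Encoding the rule, even informally, gives auditors a checklist trigger: any significant modification or rebranding of a deployed system should prompt a fresh compliance review.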

It’s also important to consider the whole value chain when assessing the AI process; risks can arise at any point along it.

What is the implementation timeline for the regulations within the AI Act?

The AI Act’s regulations will be rolled out in phases. Here’s a breakdown of the key dates to mark on your compliance calendar:

  • August 1, 2024: The AI Act officially entered into force. Think of this as the starting gun: it’s time to get your AI governance strategy in motion.
  • February 2, 2025: Regulations pertaining to prohibited AI systems begin to apply. That means any AI deemed an “unacceptable risk” (violating fundamental EU rights and values) is banned. Time to audit your systems and ensure compliance.
  • August 2, 2025: Regulations for General Purpose AI (GPAI) models and the public governance/enforcement of the Act take effect. If you’re working with large AI models, expect increased scrutiny and compliance requirements.
  • August 2, 2026: Almost all the remaining parts of the AI Act come into force, excluding Article 6(1). This encompasses the bulk of the risk assessments, management, and accountability policies your organization needs to have in place for high-risk AI models. Now is the time to apply the AI regulation, and put transparency policies for limited risk AI systems in place.
  • August 2, 2027: Article 6(1) begins to apply. This governs the classification of products with AI safety components as high risk. Furthermore, GPAI measures from 2025 are now applied to all relevant systems.
  • Note: Providers of GPAI models already placed on the market before August 2, 2025 have until August 2, 2027 to bring those models into compliance with the AI Act.
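For audit planning, the milestones above can be kept as structured data so that the obligations applicable on any given date can be listed programmatically. The dates below come from the Act's timeline; the helper function itself is just an illustrative convenience.

```python
# The AI Act's phased milestones as data, with a helper that lists
# which ones already apply on a given date. Dates follow the Act's
# timeline; the function is an illustrative convenience.
from datetime import date

MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Prohibited-AI provisions apply",
    date(2025, 8, 2): "GPAI and governance/enforcement provisions apply",
    date(2026, 8, 2): "Most remaining provisions apply (except Article 6(1))",
    date(2027, 8, 2): "Article 6(1) applies; GPAI measures extend to all systems",
}

def applicable(on: date) -> list[str]:
    """Milestones whose effective date is on or before the given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= on]

for m in applicable(date(2025, 6, 1)):
    print(m)
# Act enters into force
# Prohibited-AI provisions apply
```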

What does Internal Audit need to consider when assessing the AI process?

Internal auditors are now facing the challenge and opportunity of assessing AI systems within their organizations, particularly in light of the EU AI Act. It’s no longer just about traditional financial risks but also about compliance, ethics, and societal impact.

Key Considerations for Internal Auditors:

  • Understanding the AI Landscape: Auditors need to develop a solid grasp of AI technologies. This includes the varying levels of autonomy in AI systems, their capabilities to generate outputs, and how they influence different environments. Challenge the organization on how it defines AI and ensures consistency across all business units.
  • Risk Categorization: Focus on how the organization classifies AI systems according to the AI Act’s risk categories (unacceptable, high, limited, minimal, and general-purpose AI). Understanding that these categories represent legal definitions, not just internal risk assessments, is crucial.
  • Roles and Responsibilities: Recognize the differing roles within the AI value chain (provider, deployer, distributor, importer, etc.) and the associated obligations under the AI Act. Determine how the organization’s role for each AI system affects compliance requirements. Keep in mind that these roles can shift over time, triggering new obligations.
  • Compliance as a Project: Treat AI Act compliance as a project with defined stages and milestones to ensure the organization is preparing in an orderly manner. Adapt high-level requirements to the organization’s specific context.
  • Risk Management and Accountability: Ensure risk assessment, risk management, and accountability systems are established for high-risk AI models. Scrutinize data governance practices for training, validation, and testing datasets, with an eye toward data quality and minimizing bias.
  • Transparency and Oversight: Evaluate transparency policies for limited-risk AI systems and guarantees on human oversight. Ascertain that providers of GPAI systems with systemic risk notify the commission and have appropriate compliance policies in place. Probe for transparency mechanisms for deployers and providers.

Furthermore, here’s a timeline that Internal Audit should keep in mind:

  • February 2, 2025: Make sure proper staff AI-literacy training material is available, there is an inventory of AI systems, these have been risk-classified, AI systems with unacceptable risk have been withdrawn, and policies are in place to ensure future AI systems are evaluated appropriately.
  • August 2, 2025: Check the transparency protocols of GPAI models. Make sure the organisation knows which regulatory and oversight bodies it will interact with. Providers of GPAI models with systemic risk need to notify the Commission.
  • August 2, 2026: Audit policies for appropriate risk management, risk assessment, and accountability systems for high-risk models, and assess policies for transparency.
  • August 2, 2027: Ensure that the GPAI measures established in 2025 are now applied to all systems. Providers of products with AI components need to ensure those products comply with the high-risk AI obligations.

What specific actions must be taken regarding AI by organizations?

As AI systems become more prevalent, organizations face increasing regulatory scrutiny, particularly with the enforcement of the EU AI Act. The specific actions required will depend on their role (Provider, Deployer, etc.) and the risk classification of their AI systems.

Key Action Areas:

  • Establish AI Literacy:

    Companies need to ensure that personnel dealing with AI systems possess a sufficient level of AI literacy to understand and manage associated risks and ensure compliance.

  • Maintain an AI Registry:

    Companies should create and maintain a comprehensive inventory of all AI systems used within the organization and its subsidiaries, categorizing them based on the AI Act’s risk classifications (unacceptable, high, limited, minimal, GPAI).

  • Perform AI Risk Assessments:

    Conduct thorough risk assessments on all AI systems, adhering to the risk classification method prescribed by the AI Act, not just traditional risk analysis.

  • Implement Risk Management Systems (High-Risk AI):

    For high-risk AI systems, establish, document, maintain, and implement a risk management system to manage reasonably foreseeable risks throughout the entire system lifecycle.

  • Ensure Data and Data Governance (High-Risk AI):

    Implement robust data governance practices for training, validation, and testing data sets, focusing on data quality and bias mitigation.

  • Prepare Technical Documentation (High-Risk AI):

    Draw up comprehensive technical documentation before the system is put into service, as set out in the AI Act’s annexes.

  • Implement Record Keeping (High-Risk AI):

    Maintain detailed records of events (logs) to ensure traceability of AI system functioning.

  • Ensure Transparency and Provision of Information (High-Risk AI):

    Provide deployers with transparent and understandable information about the AI’s output usage.

  • Implement Post Market Monitoring (High-Risk AI):

    Implement a plan to collect relevant data on the performance of the AI system throughout its lifetime.

  • Establish Human Oversight (High-Risk AI):

    Design AI systems with provisions for effective human oversight by natural persons.

  • Respect Copyright Laws:

    Providers of GPAI models must put in place policies to comply with EU copyright law, including honoring rights holders’ text-and-data-mining opt-outs.

  • Ensure Accuracy, Robustness, and Cybersecurity (High-Risk AI):

    Guarantee high standards of accuracy, robustness, and cybersecurity for consistent AI system performance.

  • Implement a Quality Management System (High-Risk AI):

    Implement a QMS to ensure compliance with the AI Act.

  • Transparency for Limited Risk AI:

    Inform end users that they are interacting with an AI system.

  • Mark AI Generated Content:

    Providers and deployers need to mark AI-generated content, for example images and videos, with machine-readable indicators.

Companies must ensure conformity with the AI Act through internal controls or quality management system assessments, potentially involving notified bodies. Providers must create an EU declaration of conformity and affix CE marking. Deployers also have responsibilities, including ensuring responsible AI usage with the necessary training, competence, and authority.

Furthermore, if providers have reason to suspect non-conformity with the AI Act, they must take corrective action.

What is the central purpose of the questionnaire and survey?

The questionnaire and survey detailed in this document serve a specific and critical purpose: to gauge the landscape of AI adoption and auditing practices within companies, particularly in the context of the EU AI Act. This information is crucial for understanding how prepared organizations are for the AI Act’s requirements and to identify potential gaps in their AI governance frameworks.

The survey aims to:

  • Understand AI Act application within organizations.
  • Assess current AI usage across various business processes.
  • Evaluate present internal auditing approaches toward AI systems.

Survey Details and Participant Profile

The survey included responses from over 40 companies, with a significant portion (39%) operating in the financial sector (banking, insurance) and a large majority (70%) having operations exclusively or also in Europe. Since 70% of respondent firms have small internal audit teams (fewer than 10 FTEs), and almost half of those lack specialized IT auditors, there is a clear need to reinforce IT capabilities within internal audit functions.

Key Actionable Findings

  • AI Adoption is Widespread: A substantial 57% of companies have already deployed or are actively implementing AI systems.
  • AI Literacy Gap: While a majority (85%) of respondents have a good or fair understanding of AI, a lower percentage (71%) understand AI auditing specifically, and only 56% report a clear view of the EU AI Act. This implies an overall need for heightened training on the EU AI Act so audit departments can perform AI assurance effectively.
  • Compliance Efforts Underway: 60% of the companies affected by the AI Act are initiating projects to ensure compliance with new regulatory requirements.
  • Internal Audit AI Usage Low: The majority (72%) of internal audit departments are not actively leveraging AI systems in their audit processes. When AI is used, it’s primarily for risk assessment (33%).
  • Generative AI Adoption: 64% of companies are using or implementing Generative AI systems, and 44% have internal regulations governing their development and usage.

This data highlights the need for organizations to enhance their compliance efforts, provide adequate training on the AI Act, and consider incorporating AI into their internal audit processes. For internal auditors, the survey underscores the importance of developing frameworks to assess and audit the use of AI within their organizations so that it delivers benefits while risks are effectively reduced.

What are the key findings regarding company AI adoption based on survey results?

A recent survey sheds light on the current state of AI adoption within companies, particularly in the context of the EU AI Act. The findings reveal pervasive AI usage, but also highlight areas of concern regarding compliance readiness and internal audit capabilities.

AI Adoption Rates and Focus Areas

  • Widespread Adoption: Approximately 57% of companies surveyed have either already deployed AI systems (39%) or have ongoing implementation projects (18%).
  • AI Act Awareness Needed: Although the majority of respondents operate in Europe and are therefore subject to the AI Act (60%), understanding of the EU AI Act’s requirements remains low (56%). This points to a need for dedicated training initiatives within organizations.
  • Generative AI Momentum: There is substantial interest in generative AI, with 64% of surveyed companies either using or implementing these systems. Some 44% of these companies have established internal regulations dealing with the technology.
  • Process Support: Both standard and generative AI systems are primarily being deployed to support:
    • Customer Service, Sales and Marketing.
    • Business Intelligence and Analytics, Finance and Accounting.
    • IT and Cybersecurity (more prominent with Generative AI).

Key Compliance and Regulatory Concerns

  • AI Use in Audit Lags: A significant portion of companies (72%) do not leverage AI for internal audit activities.
  • Internal Audit Readiness: Only 28% have defined a standard technological architecture for existing AI systems, and 44% have implemented internal regulations to oversee AI use. While 85% report a good or fair understanding of AI in general, only 56% of respondents declare a good or fair understanding of the AI Act, revealing potential challenges in auditing AI systems without a robust compliance structure.

Practical Implications for Compliance Officers

  • Upskilling Imperative: Organizations consistently need to plan and deploy dedicated training on EU AI legislation to promote responsible and competent use of AI.
  • Internal Audit Empowerment: Internal auditors’ AI-auditing skills are currently built mainly through internal or external training; these skills must keep pace with the evolving regulations to sustain compliance.

What are the key insights from the survey regarding Internal Audit’s role with AI?

A recent survey sheds light on the current state and future direction of Internal Audit’s involvement with AI, particularly in the context of the EU AI Act. The findings reveal both opportunities and challenges for auditors as they navigate this evolving landscape.

AI Adoption and Understanding:

Key insights include:

  • Limited AI Usage in Auditing: A significant majority (72%) of Internal Audit departments are currently not leveraging AI systems for their audit activities.
  • Specific Use Cases: Among those using AI, risk assessment is the primary activity supported (33%).
  • Understanding AI Concepts: While most respondents report a good or fair understanding of general AI concepts (85%) and auditing AI systems (71%), understanding of the EU AI Act specifically is lower (56%). This signals a crucial need for targeted training.

Addressing Skills Gap:

The survey underscores the need for enhanced skills within audit departments:

  • Skills Development: Audit departments are primarily addressing AI auditing skills through internal/external training (57%) and knowledge sharing (29%), with limited dedicated hiring (14%).
  • Small Audit Teams: A substantial number of respondents (70%) indicate that their internal audit function comprises fewer than 10 full-time employees (FTEs). Almost half (48%) don’t have specialized IT auditors. Combined with the rapidly evolving technology, these data highlight the need to strengthen internal audit teams with IT skills.

The EU AI Act and Compliance:

The research reveals critical insights into the Act, compliance plans, and Internal Audit’s role:

  • Act Applicability: 60% of companies acknowledge that they will be subject to the new AI Act.
  • Compliance Projects: Just over half (53%) of relevant firms have either initiated or plan to start a compliance project to adhere to the new regulations.

Generative AI Insights:

Specific insights into the use of Generative AI include:

  • Adoption Stats: 64% of firms either use or plan to implement Generative AI systems.
  • Internal Regulations: 44% of respondents have internal regulations specifically for Generative AI.
  • Processes Supported: The most frequent application of AI and GenAI is in customer service, sales, marketing, and business intelligence, analytics, finance, and accountancy.

These findings underscore the need for Internal Audit functions to proactively develop AI audit frameworks and increase their understanding of the AI Act. As organizations race to adopt AI, auditors play a pivotal role in ensuring responsible AI usage and compliance.

The arrival of the AI Act signals a significant shift in the landscape, demanding that organizations evolve their approach to governance and compliance. While many businesses are already embracing AI, a clear understanding of the Act’s nuances, particularly within the regulatory realm, remains a challenge. It is crucial that companies invest time and resources in compliance, recognizing that the future success of AI depends on a foundation of ethical considerations, robust risk management, and transparent practices. Internal Audit must quickly develop the knowledge and tools necessary to provide crucial oversight, guiding their organizations towards responsible innovation within this transformative technological environment.
