Category: AI Regulation

CIBC Leads the Way in Responsible AI Adoption

The Canadian Imperial Bank of Commerce (CIBC) has taken a significant step in responsible artificial intelligence by signing the federal government’s voluntary code of conduct for generative AI. This commitment highlights CIBC’s dedication to ethical AI development and positions it as a leader in the banking sector’s adoption of innovative technologies.

Read More »

EU Unveils AI Action Plan to Accelerate Business Adoption

The European Commission is set to launch an AI action plan on April 9, aimed at accelerating the deployment of artificial intelligence tools by businesses. The plan will focus on five key areas, including infrastructure and data access, while also addressing the need to streamline existing regulations.

Read More »

Preparing for the EU AI Act: Strategies for Compliance

The EU AI Act introduces new regulatory requirements for the responsible use of artificial intelligence, aiming to protect society and build trust in the technology. Companies must take proactive steps toward compliance, as the Act applies in stages beginning in February 2025 and will significantly affect a wide range of sectors.

Read More »

AI Act Negotiators Warn of Fundamental Rights Oversight

Negotiators of the AI Act have expressed concerns that fundamental rights are being neglected in a key implementation document. A letter signed by leading MEPs and the Spanish minister stresses that these issues must be addressed to ensure effective implementation.

Read More »

Northern Ireland’s AI Firms Face Stricter Regulations Amid EU Compliance

AI businesses in Northern Ireland are expected to face stricter regulations than those in the rest of the UK, as highlighted by Dr. Barry Scannell, an expert in AI law. This stems from the European Commission’s proposal to apply the EU AI Act in Northern Ireland post-Brexit, potentially creating regulatory divergence between Northern Ireland and the rest of the UK.

Read More »

MEPs Raise Alarm Over Easing AI Risk Regulations

A group of MEPs has raised serious concerns with the European Commission about proposed changes to the AI code of practice that would make risk assessments for fundamental rights and democracy voluntary for AI system providers. They argue that this shift undermines the core principles of the AI Act, potentially allowing discriminatory content and political interference in elections.

Read More »

Balancing Innovation and Regulation in AI Development

The article argues that sound AI regulation does not stifle innovation but rather fosters it by building user trust and ensuring safety. It emphasizes the need for guidelines that encourage responsible AI development while safeguarding consumer data and privacy.

Read More »

Integrating AI Governance into Company Policies

The post discusses how to structure AI governance within organizations, outlining a three-tier governance structure that includes an AI Safety Review Board and operational teams. It also offers practical implementation strategies, such as leveraging existing frameworks and keeping policies concise to improve compliance.

Read More »