Understanding AI Literacy Obligations Under the EU AI Act

On May 7, 2025, the European Commission published a Q&A detailing the AI literacy obligations under the EU AI Act, clarifying the measures entities must take to ensure compliance. The document emphasizes the importance of training staff to understand AI systems, as well as the necessity for entities to document their compliance measures.

Read More »

AI Regulation: Strategies for Ethical Innovation

The panel discussion at the AI Agenda conference highlighted the evolving landscape of global AI regulation, noting the diverging approaches taken by the EU, UK, and US. As countries navigate the balance between innovation and regulatory frameworks, the focus remains on accountability, transparency, and the need for human oversight in AI governance.

Read More »

AI Governance Takes Center Stage at G7 Summit

World leaders at the upcoming G7 Summit are expected to discuss how to harness the potential of artificial intelligence while establishing proper governance standards. Economic experts highlight the need to address equity issues and support those who may be disrupted by rapid advancements in AI technology.

Read More »

Deregulation Threatens the Future of the AI Act

The EU’s Artificial Intelligence Act, adopted in August 2024, faces significant challenges as the European Commission seeks to simplify regulatory obligations, raising concerns that its provisions could be weakened. The push for deregulation has sparked backlash from civil society, which argues that any amendments must be weighed carefully to ensure the protection of fundamental rights.

Read More »

Regulating AI: Balancing Innovation and Oversight

The EU has adopted the world’s first comprehensive AI regulation, known as the Artificial Intelligence Act, which aims to balance innovation and safety by categorizing AI applications based on risk levels. However, there are concerns that this regulation could stifle competitiveness and push companies to avoid the European market due to potential fines for non-compliance.

Read More »

AI Trust: Balancing Benefits and Risks in a Complex Landscape

A global study reveals that while 66% of people regularly use AI and see its benefits, only 46% are willing to trust it, highlighting a significant trust gap. The study emphasizes the need for improved AI literacy and governance to address public concerns and foster acceptance of AI technologies.

Read More »

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations of the European Union and Australia: the EU’s AI Act sets stringent requirements based on risk levels, while Australia is taking a phased approach with proposed guardrails for high-risk AI. Understanding these regulations is essential for compliance and can provide a competitive edge in the tech industry.

Read More »

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and promote discussions on AI governance while providing practical recommendations for implementation.

Read More »

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly under the EU’s AI Act, which requires organizations to ensure their staff have a sufficient level of AI literacy. This article emphasizes the importance of understanding AI systems, recognizing risks, and ensuring compliance to avoid potential liabilities.

Read More »

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The draft AI Market Surveillance and Innovation Promotion Act aims to enhance existing regulatory frameworks and provide businesses with clear guidance on AI governance.

Read More »