Category: AI

EU AI Act: Transforming Cybersecurity and Privacy Strategies

The EU AI Act introduces significant regulatory changes that impact cybersecurity and privacy teams. Organizations must prioritize governance, visibility, and proactive risk management to comply with the new standards while leveraging opportunities for enhanced AI development and deployment.


Unpacking the AI Act’s Emotional Recognition Loophole

The article discusses the implications of the EU AI Act’s ban on emotion recognition technologies (ERTs), highlighting a potential loophole: identifying emotional expressions without inferring individuals’ internal emotional states remains permitted. Although the regulation acknowledges the current technical limitations of ERTs, it may not adequately protect users if such technologies become fully functional in the future.


AI Ethics: Balancing Innovation and Responsibility

The rapid development of artificial intelligence (AI) technology is profoundly changing our society, bringing significant ethical challenges such as privacy protection, bias, and accountability. This paper explores these challenges while also analyzing the opportunities AI presents for promoting social equity and enhancing decision-making efficiency.


AI Regulation: A Call for Accountability and Transparency

State Rep. Hubert Delany emphasizes the urgent need for AI regulation to ensure fairness, accountability, and transparency in systems that affect people’s lives. He supports Senate Bill 2, which aims to establish human oversight and prevent discrimination in AI decision-making processes.


EIOPA’s Insights on AI Governance in Insurance

On February 12, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) published a consultation on its draft opinion regarding artificial intelligence (AI) governance and risk management. The Opinion provides guidance for insurance undertakings on the responsible use of AI systems in the insurance value chain, emphasizing the importance of proportionality in governance and risk management measures.


Harnessing AI for Global Good

At SXSW 2025, Dr. Rumman Chowdhury emphasized the importance of viewing artificial intelligence through a diverse lens and highlighted the need for responsible AI practices that empower users. She advocates for a shift from passive acceptance of technology to active participation, calling for systems that allow individuals to make informed choices about the algorithms that affect their lives.


Industry Concerns Mount Over EU’s Draft AI Code

The draft Code of Practice on General-Purpose Artificial Intelligence (GPAI) aims to assist AI companies in complying with the EU’s AI Act, focusing on transparency, copyright, and risk assessment. However, the tech industry has raised significant concerns about the draft’s burdensome requirements and its potential impact on innovation.


Revised Guidelines for Copyright Compliance in AI Models

The GPAI Code of Practice outlines the requirements for compliance with EU copyright law for providers of General-Purpose AI models. The third draft emphasizes proportional compliance based on the provider’s size and capacities, with significant changes to copyright policy measures compared to the previous draft.
