Category: AI Accountability

Rethinking AI Regulation: Embracing Existing Laws

Virginia Governor Glenn Youngkin vetoed House Bill 2094, which aimed to establish a legal framework for AI, arguing that existing laws are sufficient to address AI-related issues. The veto reflects a growing trend among states to reconsider hasty AI regulation and to rely instead on existing legal protections for consumers.

South Africa’s AI Policy: The Need for Accountability

South Africa’s national AI policy is widely viewed as a positive step, but experts warn that it lacks firm consequences for organizations that violate ethical AI principles. Without enforceable penalties, accountability and compliance could weaken, with negative implications for local businesses as they adopt AI technologies.

Spain’s New Bill Mandates Labelling of AI-Generated Content

Spain’s government has approved a draft bill regulating the ethical use of artificial intelligence, including mandatory labelling of AI-generated content. The bill aims to align national law with EU regulations and prohibits harmful AI practices such as unlabelled deepfakes and biometric recognition in public spaces.
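For illustration, here is a minimal sketch of what machine-readable labelling of AI-generated content could look like in practice. The field names and format are assumptions for this example, not requirements taken from the Spanish bill; real deployments might instead rely on provenance standards such as C2PA.

```python
# A minimal sketch of machine-readable labelling for AI-generated content.
# The field names are hypothetical and not taken from the Spanish bill;
# production systems might use provenance standards such as C2PA instead.
import json
from datetime import datetime, timezone

def label_ai_content(content: str, generator: str) -> str:
    """Wrap generated content in a payload that declares its AI origin."""
    payload = {
        "content": content,
        "ai_generated": True,    # the mandatory disclosure flag
        "generator": generator,  # which system produced the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

print(label_ai_content("An AI-written product description.", "example-model-1.0"))
```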

Reviving the AI Liability Directive: Challenges and Prospects

The European Commission proposed the AI Liability Directive (AILD) in September 2022 to introduce uniform rules for non-contractual civil claims related to AI. The proposal has since stalled in the legislative process amid resistance, and recent signals suggest it may be withdrawn from consideration.

New Product Liability Challenges for AI Innovations

The new EU Product Liability Directive 2024/2853, which entered into force on December 8, 2024, significantly modernizes product liability rules and explicitly covers software and AI-integrated products. Companies that build AI into their products should be aware that they can be held liable for damage caused by software defects, including defects arising from insufficient updates or cybersecurity weaknesses.

CJEU’s Inquiry into AI Act and Automated Decision-Making Challenges

On November 25, 2024, Bulgaria’s Sofia District Court requested a preliminary ruling from the Court of Justice of the European Union (CJEU) on automated decision-making under the AI Act, citing concerns over the transparency and fairness of a telecoms company’s fee-calculation method. The court seeks clarification on 17 legal questions concerning consumer rights and the interpretation of Article 86(1) of the AI Act.

EU Lawmaker Seeks Business Input on AI Liability Directive

EU lawmaker Axel Voss is consulting with businesses to assess whether new liability rules for artificial intelligence are needed as part of the proposed AI Liability Directive. The directive aims to modernize existing rules and address the legal challenges posed by AI systems.

Understanding AI Transparency: Building Trust in Technology

AI transparency refers to understanding how artificial intelligence systems make decisions and what data they use, essentially providing insight into their internal workings. It is crucial for building trust with users and stakeholders, particularly as AI becomes increasingly integrated into everyday business practices.
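As a concrete illustration, the sketch below shows one simple form of transparency: surfacing which input features most influence a trained model’s decisions. It assumes scikit-learn and uses a bundled demo dataset; the model and data are illustrative assumptions, not drawn from any system discussed above.

```python
# A minimal sketch of one concrete form of AI transparency: reporting
# which input features most influenced a trained model's decisions.
# The dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Rank features by the weight they carried in the model's decisions,
# giving users and auditors insight into its internal workings.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:.3f}")
```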

Ensuring Responsibility in AI Development

AI accountability refers to responsibility for harmful outcomes produced by artificial intelligence systems, which can be difficult to assign given the complexity and opacity of these technologies. Because AI systems are often criticized as “black boxes,” understanding their decision-making process is essential for ensuring accountability and transparency.
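To make this concrete, below is a minimal sketch of a decision audit trail, one common building block for accountability: recording each automated decision with its inputs and model version so that responsibility can be traced later. All names and fields here are hypothetical illustrations, not an established API.

```python
# A minimal sketch of a decision audit trail, one common building block
# for AI accountability. All names (log_decision, model_version, the log
# format) are hypothetical illustrations, not an established API.
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, output, path: str = "decisions.log") -> str:
    """Append one automated decision to an append-only log so that it can
    be reviewed later if responsibility for an outcome is questioned."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: record a hypothetical loan decision for later audit.
decision_id = log_decision("credit-model-2.1", {"income": 42000, "term_months": 36}, "declined")
print(f"logged decision {decision_id}")
```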
