Category: AI Accountability

New Product Liability Challenges for AI Innovations

The new EU Product Liability Directive 2024/2853, which came into force on December 8, 2024, significantly modernizes product liability rules and explicitly includes software and AI-integrated products. Companies using AI in their products must be aware that they can be held liable for damages caused by software defects, including issues arising from insufficient updates or cybersecurity weaknesses.

CJEU’s Inquiry into AI Act and Automated Decision-Making Challenges

On November 25, 2024, Bulgaria’s Sofia District Court requested a preliminary ruling from the CJEU regarding automated decision-making under the AI Act, citing concerns over transparency and fairness in a telecoms company’s fee calculation method. The court seeks clarification on 17 legal questions pertaining to consumer rights and the interpretation of Article 86(1) of the AI Act.

EU Lawmaker Seeks Business Input on AI Liability Directive

EU lawmaker Axel Voss is consulting businesses to assess the need for new liability rules for artificial intelligence as part of work on the proposed AI Liability Directive. The proposal aims to modernize existing liability rules and address the legal challenges posed by AI systems.

Understanding AI Transparency: Building Trust in Technology

AI transparency refers to the ability to understand how artificial intelligence systems make decisions and what data they use, providing insight into their internal workings. It is crucial for building trust with users and stakeholders, particularly as AI becomes increasingly integrated into everyday business practices.

Ensuring Responsibility in AI Development

AI accountability refers to responsibility for harmful outcomes produced by artificial intelligence systems, which can be difficult to assign because of the complexity and opacity of these technologies. Because AI systems are often criticized as “black boxes,” making their decision-making processes understandable is essential for ensuring accountability and transparency.

AI Accountability: Defining Responsibility in an Automated World

As Artificial Intelligence becomes increasingly integrated into our daily lives and business operations, the question of accountability for AI-driven decisions and actions gains prominence. Understanding who is responsible when AI goes wrong—be it users, managers, developers, or regulatory bodies—is essential for fostering trust and ensuring ethical practices in AI utilization.

AI Accountability: Ensuring Trust in Technology

The NTIA’s AI Accountability Policy Report emphasizes the importance of establishing a framework for assessing the trustworthiness of AI systems and ensuring transparency in their operations. It highlights the collaborative efforts of the Biden-Harris Administration and various stakeholders to promote responsible AI development and to address the risks associated with AI technologies.

A.I. Accountability: Defining Responsibility in Decision-Making

The article discusses the challenges of assigning accountability in artificial intelligence systems, emphasizing that as A.I. technologies become more prevalent, it is unclear who should be held responsible for poor decisions made by these systems. It advocates for shared accountability among developers, users, and organizations, supported by testing, oversight, and regulations to ensure responsible deployment.

Ensuring Accountability in AI Systems

AI actors, meaning the organizations and individuals that play an active role in the AI system lifecycle, must be accountable for the proper functioning of AI systems and adhere to established principles, ensuring traceability throughout that lifecycle. This includes applying a systematic risk management approach to address risks associated with AI, such as harmful bias and human rights concerns.
