Category: AI Ethics

Designing AI with Accountability in Mind

AI designers and developers are responsible for considering the design, development, decision processes, and outcomes of AI systems. Every individual involved in the creation of AI must account for its impact on the world and adhere to clear company policies regarding responsibility and accountability.

Read More »

Ensuring AI Accountability Through Risk Governance

This workshop-based exploratory study investigates accountability in Artificial Intelligence (AI) through risk governance. It identifies key challenges and characteristics necessary for effective AI risk management methodologies, aiming to bridge the gap between conceptual understanding and practical application in the industry.

Read More »

The Essential Principles of Responsible AI

Artificial Intelligence (AI) is increasingly influencing our lives, necessitating the incorporation of ethical principles to uphold human values in its design. The ART design principles—Accountability, Responsibility, and Transparency—are essential for the development of AI systems that are sensitive to these values.

Read More »

Accountability and Governance in AI: Key Considerations

This guidance examines the accountability and governance implications of AI, emphasizing that AI systems processing personal data must comply with data protection law. It highlights the necessity of conducting data protection impact assessments (DPIAs) to identify and manage the risks associated with AI technologies.

Read More »

Trump’s AI Policy Shift and Europe’s Regulatory Response

President Donald Trump has repealed Joe Biden’s 2023 artificial intelligence regulations, marking a significant shift toward lighter-touch rules aimed at promoting growth and innovation. Meanwhile, France’s first minister for AI has criticized the EU AI Act, urging that regulation should facilitate rather than hinder innovation.

Read More »

AI Guidelines Must Prioritize Human Rights in the EU

The EU is working on new guidelines related to the AI Act, emphasizing that human rights should be the central guiding principle. Over two dozen organizations have called for these guidelines to address fundamental rights and clarify loopholes in the legislation, particularly concerning policing and migration.

Read More »

Upholding Human Rights in AI Regulation

The upcoming Commission guidelines on the AI Act implementation must prioritize human rights and justice to ensure meaningful enforcement of the legislation. The guidelines should clarify that all AI systems, including those deemed ‘simple,’ are within scope and establish prohibitions against the misuse of technology that threatens fundamental rights, particularly for marginalized groups.

Read More »

Protecting the Vulnerable: Insights from Article 5 of the EU AI Act

Article 5(1)(b) of the EU AI Act focuses on protecting vulnerable groups from exploitation by AI systems, particularly people with a limited ability to recognize or resist manipulation. This provision aims to ensure that industries such as advertising, healthcare, and finance design their AI systems responsibly and avoid targeting or manipulating individuals in vulnerable situations.

Read More »

Denmark’s AI Welfare System: Surveillance and Discrimination Unveiled

Amnesty International’s report highlights that Denmark’s welfare authority risks discriminating against marginalized groups through its use of AI tools for social benefits fraud investigations. The sweeping surveillance practices have eroded individual privacy and created an atmosphere of fear among those targeted by these algorithms.

Read More »

EU AI Act: First Draft Highlights Copyright Concerns

The European Union has released the first draft of a ‘General-Purpose AI Code of Practice’, aimed at ensuring meaningful implementation of the AI Act. The draft includes provisions on copyright and requires companies to conduct due diligence regarding copyright-protected content when developing AI models.

Read More »