Category: AI Safety Regulations

AI’s High Risk in the Election Landscape

An internal briefing note for Canada’s election watchdog warns that artificial intelligence poses a “high” risk to the ongoing election campaign. It highlights concerns about potential violations of the Elections Act, particularly disinformation and the use of AI tools to mislead voters.

Read More »

Understanding AI Safety Levels: Current Status and Future Implications

AI Safety Levels (ASLs) categorize AI safety protocols into distinct stages, ranging from ASL-1, where models pose minimal risk, to ASL-4, where models may exhibit autonomous behaviors. Today’s frontier models sit at ASL-2, and there is an urgent need for regulation to address the risks that come with advancing AI capabilities.

Read More »

AI Accountability in Healthcare: Rethinking Safety and Ethics

The paper discusses the challenges of moral accountability and safety assurance in the use of AI-based clinical tools. It argues that the opaque decision-making processes of these systems require an updated understanding of accountability, and suggests involving AI developers in the assessment of patient harm.

Read More »

EU’s Historic AI Act: Balancing Innovation and Safety

The European Union has passed the AI Act, its first major law regulating artificial intelligence, aiming to ensure safety and protect fundamental rights. The legislation takes a risk-based approach, categorizing AI systems by risk level and applying stricter requirements to those deemed high-risk or banning those posing unacceptable risk.

Read More »