May 1, 2025

Regulating AI Chatbots: A Call for Clearer Guidelines

The Molly Rose Foundation has criticized Ofcom for its unclear position on how AI chatbots are regulated, warning that these systems may pose significant risks to public safety. The charity’s CEO emphasized the urgent need for tighter rules under the Online Safety Act to protect individuals from poorly regulated AI technologies.

Bridging Divides in AI Safety Dialogue

Despite numerous AI governance events, a comprehensive framework for AI safety has yet to be established, highlighting the need for focused dialogue among stakeholders. A dual-track approach that combines broad discussions with specialized dialogue groups could foster consensus and address context-specific risks effectively.

Empowering Security Teams in the Era of AI Agents

Microsoft Security VP Vasu Jakkal emphasized the importance of governance and diversity in the evolving cybersecurity landscape, particularly with the rise of agentic AI. Jakkal argued that as organizations adopt increasingly autonomous AI tools, cybersecurity professionals must build their AI skills to remain relevant and effective.

Understanding ISO 42001: A Framework for Responsible AI

ISO/IEC 42001 is the world’s first international standard for AI management systems, focusing on governance, accountability, and risk management across the AI lifecycle. The standard aims to help organizations build trustworthy, ethical AI systems that meet legal requirements and societal expectations.
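
As a rough illustration of the lifecycle risk tracking such a management system implies, here is a hypothetical Python sketch of an AI system record. The schema, field names, and example values are invented for this post; ISO/IEC 42001 does not prescribe any particular data structure.

```python
# Hypothetical sketch of a lifecycle risk record that an ISO/IEC 42001-style
# AI management system might maintain. All field names and values are
# invented for illustration; the standard does not prescribe a schema.
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable role, reflecting governance requirements
    intended_purpose: str
    stage: LifecycleStage
    identified_risks: list[str] = field(default_factory=list)
    mitigated_risks: list[str] = field(default_factory=list)

    def open_risks(self) -> list[str]:
        """Risks identified but not yet recorded as mitigated."""
        return [r for r in self.identified_risks if r not in self.mitigated_risks]

# Example: a chatbot in deployment with one unmitigated risk.
record = AISystemRecord(
    name="support-chatbot",
    owner="Head of Customer Operations",
    intended_purpose="Tier-1 customer support",
    stage=LifecycleStage.DEPLOYMENT,
    identified_risks=["hallucinated answers", "PII leakage"],
    mitigated_risks=["hallucinated answers"],
)
print(record.open_risks())  # ['PII leakage']
```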

EU Strategies for Defining AI Act Regulations on General-Purpose AI

EU policymakers are considering thresholds based on training compute (measured in FLOPs) to help businesses determine which regulatory requirements apply to AI models they train or modify under the EU AI Act. The proposals aim to clarify the scope of the rules for general-purpose AI models, with stakeholder feedback expected to shape new guidelines taking effect in August 2025.
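
To make the threshold mechanics concrete, the sketch below estimates a model’s training compute with the widely used 6 × parameters × tokens approximation and compares it against the AI Act’s 10^25 FLOP presumption of systemic risk (Article 51). The helper function and the example model figures are illustrative assumptions, not values taken from the proposals.

```python
# Illustrative sketch: estimating training compute against the EU AI Act's
# 10^25 FLOP systemic-risk presumption. The 6 * params * tokens rule of
# thumb and the example model below are assumptions for demonstration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption of systemic risk

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the common 6 * params * tokens heuristic."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk under the AI Act:",
      flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```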

AI Regulation: Building Trust in an Evolving Landscape

As AI adoption accelerates globally, governments are rapidly developing ethical and legal frameworks to mitigate the risks associated with AI technologies. The EU’s AI Act and comparable regulatory measures in countries such as the US, India, and China signal that sound AI governance is becoming essential for businesses that want to maintain a competitive edge.

Global Standards for AI in Healthcare: A WHO Initiative

The World Health Organization (WHO) has launched a global initiative to establish a unified governance framework for artificial intelligence (AI) in healthcare, focusing on safety, ethics, and accessibility. This initiative aims to support low- and middle-income countries in effectively integrating AI into their health systems while addressing ethical concerns and regulatory challenges.

AI Adoption and Trust: Bridging the Governance Gap

A recent KPMG study reveals that while 70% of U.S. workers are eager to leverage AI’s benefits, 75% remain concerned about potential negative outcomes, leading to low trust in AI. Nearly half of employees are using AI tools without proper authorization, highlighting significant gaps in governance and raising ethical concerns.

AI Regulation: China’s Blueprint for Global Governance

The Global South, and China in particular, is making significant strides in developing and regulating artificial intelligence, challenging the notion that these countries play a passive role in global affairs. Proactive regulation can balance innovation with ethical considerations, and cross-border cooperation among Global South nations offers a promising way to navigate AI’s challenges.
