Date: March 13, 2026

Regulating AI Toys: Ensuring Safety for Young Minds

A systematic study by the University of Cambridge urges tighter regulation of generative AI-enabled toys that interact with children and proposes introducing new safety kitemarks. The authors recommend that only developers who adhere to specific guidelines should be granted access to generative AI models for toys, emphasizing the importance of recognizing the social and emotional dimensions of early childhood development.

Read More »

Council Streamlines AI Regulations for Enhanced Innovation

The Council has agreed on a proposal to streamline rules regarding artificial intelligence (AI) as part of the EU’s “Omnibus VII” legislative package. This initiative aims to enhance legal certainty, support innovation, and ensure harmonised implementation across member states while addressing key ethical concerns related to AI practices.

Read More »

National AI Ethics Framework: Ensuring Safe and Responsible Innovation

The National Artificial Intelligence Ethics Framework has been issued to guide the safe and responsible development and deployment of AI systems, ensuring they are beneficial to individuals and society. It mandates safety, human oversight, and the prevention of harm, while also emphasizing fairness, transparency, and the need to mitigate biases in AI operations.

Read More »

Bridging the AI Identity Governance Gap

AI has opened a 92% visibility gap in identity governance, undermining traditional control models as non-human identities proliferate within core systems. Federated governance addresses this by providing a unified control layer that spans both human and machine identities, ensuring accountability and policy-driven oversight across platforms.

Read More »

EU Lawmakers Advance Key Changes to AI Act

Members of the European Parliament have reached a preliminary agreement on amendments to the EU Artificial Intelligence Act, which will be reviewed before a scheduled vote in Brussels. The proposed changes include extending compliance deadlines for high-risk AI systems and banning non-consensual explicit deepfakes to enhance consumer protection and online safety for children.

Read More »

EU Commission’s New Powers over AI Models: What You Need to Know

The draft procedural rules for enforcing the AI Act would allow the European Commission to access and inspect general-purpose AI models, including their source code and hosting infrastructure. The plans also set out independence criteria for experts and give providers a 14-day period to respond to preliminary findings.

Read More »

Essential Board Questions to Mitigate AI Risks in Robotics

Robotics companies are rapidly scaling AI capabilities, but many boards struggle to keep pace with adequate oversight. Directors need to ask pointed questions about model risk, data provenance, safety cases, and incident response plans in order to protect enterprise value while fostering innovation.

Read More »