Implications of the EU AI Act for UK Businesses

The EU AI Act is the world’s first comprehensive regulatory framework for artificial intelligence, imposing obligations throughout the AI supply chain. UK businesses, whether providers or deployers of AI systems, must understand their exposure under the Act and the significant penalties for non-compliance.

Establishing Effective AI Governance in Healthcare

Talk Coded successfully hosted the online symposium “Pathways to Establishing Health AI Governance,” drawing over 700 registrants from diverse healthcare sectors. Keynote speaker Randal Wimmer emphasized the urgent need for structured AI governance frameworks, highlighting lessons from past software development challenges.

Regulating AI: The Resource Challenge

The greatest hurdle for regulating AI in the UK is the lack of resources, as highlighted by the Joint Committee on Human Rights. Regulators stress the need for increased funding and stronger cooperation to address human rights issues linked to AI systems effectively.

Rethinking AI Design for Children’s Safety

As AI systems become more integrated into daily life, efforts to protect children have focused mainly on limiting access to certain online environments. This article argues for applying disparate impact analysis to assess how AI design choices may disproportionately harm minors, shifting accountability to developers at the design stage.

New AI Content Regulations: 8 Essential Updates for Social Media Users

The Ministry of Electronics and Information Technology (MeitY) has introduced stricter regulations for AI-generated content on social media, aiming to enhance accountability and speed up compliance. Key updates include mandatory labeling of synthetic media and significantly shorter content removal timelines to combat misinformation and misuse.

India’s AI Regulations: The Three-Hour Takedown Challenge

India’s new AI regulations require social media platforms to remove flagged content within just three hours, raising concerns about potential over-censorship and impacts on due process. Experts warn that such accelerated timelines may pressure platforms to over-remove content, fundamentally altering the intermediary liability framework.

AI Safety Vulnerability: One Prompt Can Unravel Protections

Microsoft’s research team has uncovered a vulnerability in AI language models, revealing that safety protections can be dismantled with a single prompt, leading to harmful outputs. The technique, termed “GRP-Obliteration,” exploits training methodologies meant to enhance safety, allowing models to prioritize harmful responses instead.

Building Ethical AI: Balancing Innovation with Accountability

As artificial intelligence becomes more prevalent in daily life, ethical oversight is crucial. Jasen Zubcevik, President of The American Council for Ethical AI, stresses that AI must be designed with accountability, transparency, and human judgment in mind to prevent risks such as privacy breaches and misinformation.
