Day: June 3, 2025

AI’s Rise: Addressing Governance Gaps and Insider Threats

This year’s RSAC Conference underscored how thoroughly artificial intelligence (AI) dominated cybersecurity discussions, with nearly 90% of organizations adopting generative AI for security purposes. Speakers also warned of the growing risks that accompany AI, including governance gaps and insider threats within organizations.

Ensuring AI Compliance Amidst Data Proliferation

The podcast examines the compliance risks that arise as data is processed by artificial intelligence (AI) systems, emphasizing the challenge of managing proliferating datasets. Mathieu Gorge, CEO of VigiTrust, highlights the importance of understanding data flows and maintaining compliance as organizations increasingly adopt AI technologies.

Embedding Responsible AI: From Principles to Practice

In the pursuit of Responsible AI, organizations often struggle to translate ethical principles into practical applications, leading to performative actions rather than meaningful change. To embed these values effectively, companies must focus on governance, operationalization, and creating incentives that align ethical accountability with their AI strategies.

Kickstarting Compliance with the EU AI Act: Four Essential Steps

The European Union’s Artificial Intelligence Act (AI Act) is the world’s first comprehensive regulation on AI, impacting not only European entities but also U.S.-based organizations that develop or use AI technologies. Companies must prepare for compliance by assessing their AI systems against the Act’s risk categories and implementing necessary governance measures.

Urgent Call for Global AI Human Rights Framework

New Zealand’s Chief Human Rights Commissioner, Stephen Laurence Rainbow, emphasized the urgent need for a global framework to address the human rights implications of artificial intelligence during an international conference in Doha. He highlighted the importance of discussing both the challenges and opportunities presented by AI, as well as the essential role of human rights organizations in navigating these emerging issues.

GOP’s Bold Move to Ban State AI Regulations Sparks Controversy

House Republicans have proposed a ban on state regulations regarding artificial intelligence (AI), arguing that a unified federal standard is necessary to avoid confusion for technology companies. The proposal has sparked significant debate among lawmakers and the tech community about its potential implications for AI development and consumer protections.

Blueprint for Effective AI and Social Media Regulation

The Take It Down Act demonstrates that targeted AI regulation can address online harms to children without stifling innovation. With bipartisan support and backing from major tech companies, the law criminalizes the publication of nonconsensual intimate images online and requires platforms to remove such content swiftly.

Unpacking the EU’s AI Act: Challenges and Compliance in Healthcare

During the AI Health Law & Policy Summit, panelists discussed the complexities of the EU’s AI Act and the challenges of global regulatory compliance for AI-enabled medical products. Experts emphasized the importance of proactive engagement with regulatory bodies and the need for companies to adapt their governance frameworks to meet evolving compliance requirements.

Congress’ Hidden AI Regulation Ban: A Decade of Unchecked Power

The letter raises concern over a clause in H.R. 1 that would prohibit state and local governments from regulating artificial intelligence for the next 10 years. It warns that such a moratorium could allow unelected officials to deploy AI systems without public accountability, posing significant risks under future administrations.
