Category: AI

Rethinking AI Regulation: Embracing Federalism Over Federal Preemption

The proposed ten-year moratorium on state and local regulation of AI would nullify existing state laws, undermining democratic values and states' ability to tailor governance to local needs. Rather than fostering innovation, this approach risks stifling the regulatory experimentation essential for responsible AI development.

Read More »

Singapore’s AI Strategy: Fostering Innovation and Trust

Singapore is committed to harnessing digital technology responsibly, as Minister for Communications and Information Josephine Teo emphasized at the 2025 ATxSummit. The country aims to balance private ambition with public support to foster meaningful AI adoption across sectors, ensuring inclusive growth and responsible innovation.

Read More »

Securing AI in Manufacturing: Mitigating Risks for Innovation

The integration of AI in manufacturing offers significant benefits, such as increased innovation and productivity, but also presents risks related to security and compliance. Organizations must adopt proactive governance strategies to mitigate these risks and ensure that AI technologies work effectively within their operations.

Read More »

AI’s Rise: Addressing Governance Gaps and Insider Threats

This year’s RSAC Conference highlighted the pervasive influence of artificial intelligence (AI) in cybersecurity discussions, with nearly 90% of organizations adopting generative AI for security purposes. However, the conference also raised concerns about the growing risks associated with AI, including governance gaps and insider threats within organizations.

Read More »

Ensuring AI Compliance Amidst Data Proliferation

The podcast discusses the compliance risks that arise when data is processed by artificial intelligence (AI), emphasizing the challenge of managing proliferating datasets. Mathieu Gorge, CEO of Vigitrust, highlights the importance of understanding data flows and maintaining compliance as organizations increasingly adopt AI technologies.

Read More »

Embedding Responsible AI: From Principles to Practice

In pursuing Responsible AI, organizations often struggle to translate ethical principles into practice, resulting in performative gestures rather than meaningful change. To embed these values effectively, companies must focus on governance, operationalization, and incentives that align ethical accountability with their AI strategies.

Read More »

Kickstarting Compliance with the EU AI Act: Four Essential Steps

The European Union’s Artificial Intelligence Act (AI Act) is the world’s first comprehensive regulation on AI, impacting not only European entities but also U.S.-based organizations that develop or use AI technologies. Companies must prepare for compliance by assessing their AI systems against the Act’s risk categories and implementing necessary governance measures.

Read More »

Urgent Call for Global AI Human Rights Framework

New Zealand’s Chief Human Rights Commissioner, Stephen Laurence Rainbow, emphasized the urgent need for a global framework to address the human rights implications of artificial intelligence during an international conference in Doha. He highlighted the importance of discussing both the challenges and opportunities presented by AI, as well as the essential role of human rights organizations in navigating these emerging issues.

Read More »

GOP’s Bold Move to Ban State AI Regulations Sparks Controversy

House Republicans have proposed a ban on state regulations regarding artificial intelligence (AI), arguing that a unified federal standard is necessary to avoid confusion for technology companies. The proposal has sparked significant debate among lawmakers and the tech community about its potential implications for AI development and consumer protections.

Read More »

Blueprint for Effective AI and Social Media Regulation

The Take It Down Act demonstrates that targeted regulation of AI can be achieved without stifling innovation, successfully addressing online harms to children. With bipartisan support and backing from major tech companies, the law criminalizes the publication of nonconsensual intimate images online, requiring platforms to act swiftly in removing such content.

Read More »