Category: AI Governance

Governance Gaps in AI Surveillance Across the Asia-Pacific

The Asia-Pacific region is experiencing a rapid expansion of AI-powered surveillance technologies, especially from Chinese companies, yet lacks the governance frameworks to regulate their use effectively. This creates a significant risk as these technologies can be repurposed to consolidate political control and suppress dissent.

Read More »

Embedding AI in Financial Crime Prevention: Best Practices

Generative AI is rapidly gaining attention in the financial sector, prompting firms to integrate this technology responsibly into their anti-financial crime frameworks. Experts emphasize the importance of strong governance, transparency, and human oversight to ensure that AI models are effective and compliant with evolving regulations.

Read More »

Harnessing AI for Smarter Tax Solutions

Artificial intelligence (AI) is revolutionizing tax compliance by automating processes, improving accuracy, and enhancing efficiency in tax operations. With its ability to analyze large datasets, AI provides valuable insights that help businesses navigate the complexities of tax regulations and optimize their strategies.

Read More »

Rethinking AI Regulation: The Case for Entity-Based Governance

The paper discusses the debate on whether frontier AI regulation should focus on the core technology (AI models) or its uses, highlighting the challenges of both model-based and use-based approaches. It proposes an alternative approach: entity-based regulation that targets the large business entities developing powerful AI systems, aiming to better address the unique risks posed by frontier AI.

Read More »

Texas Implements Groundbreaking AI Governance Law

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026, introducing new regulations for the use of AI by companies and government entities. The law includes provisions for prohibiting certain uses of AI, requiring disclosures in healthcare, and creating an AI regulatory “sandbox.”

Read More »

Ensuring Safe Adoption of Generative AI: The Role of Output Inspection

As organizations adopt generative AI tools like Zoom AI Companion and Microsoft Copilot, they must establish robust guardrails to manage permissions and usage policies. However, inspection of AI-generated content is crucial to ensure compliance and control over what the AI produces, closing the gap between policies and actual outputs.

Read More »

AI-Generated Band Sparks Controversy in Music Streaming

The Velvet Sundown, an AI-generated band, garnered millions of streams on Spotify before being exposed as a synthetic creation, igniting a debate on copyright and content regulation in the music industry. The incident highlights the lack of oversight regarding AI-generated content on streaming platforms and raises concerns about transparency and the impact on human musicians.

Read More »

Understanding Model Cards for Responsible AI Development

A model card report is a standardized document that provides transparency and accountability in AI model development and deployment. It outlines a model’s purpose, usage, performance metrics, and limitations, making it an essential tool for responsible AI governance amid increasing regulatory demands.

Read More »

Mastering ISO 42001 for Responsible AI Governance

This article provides a detailed overview of the ISO 42001 standard, the first international standard for AI management systems, emphasizing its importance for responsible AI governance. It discusses how organizations can innovate while mitigating risks by adhering to the principles outlined in ISO 42001.

Read More »

The Rising Threat of AI Jailbreaking in Enterprises

AI models have become more integrated into enterprise workflows, but they face significant security threats from jailbreak attempts that exploit their built-in restrictions. These deliberate efforts to bypass ethical and operational rules highlight vulnerabilities in AI governance, especially when enterprise models cannot consistently refuse harmful requests.

Read More »