Governance Gaps in AI Surveillance Across the Asia-Pacific

The Asia-Pacific region is experiencing a rapid expansion of AI-powered surveillance technologies, especially from Chinese companies, yet lacks the governance frameworks to regulate their use effectively. This creates a significant risk as these technologies can be repurposed to consolidate political control and suppress dissent.

Embedding AI in Financial Crime Prevention: Best Practices

Generative AI is rapidly gaining attention in the financial sector, prompting firms to integrate this technology responsibly into their anti-financial crime frameworks. Experts emphasize the importance of strong governance, transparency, and human oversight to ensure that AI models are effective and compliant with evolving regulations.

Harnessing AI for Smarter Tax Solutions

Artificial intelligence (AI) is revolutionizing tax compliance by automating processes, improving accuracy, and enhancing efficiency in tax operations. With its ability to analyze large datasets, AI provides valuable insights that help businesses navigate the complexities of tax regulations and optimize their strategies.

Harnessing Compliance: Seizing Opportunities Amid EU AI Regulation Delays

The European Union’s Artificial Intelligence Act (AI Act) is poised to reshape global AI governance, yet compliance deadlines are approaching amid delays in finalizing essential guidelines. This situation provides a strategic opportunity for proactive firms to align with emerging standards and gain competitive advantages in key sectors like healthcare, autonomous systems, and data analytics.

Rethinking AI Regulation: The Case for Entity-Based Governance

The paper examines the debate over whether frontier AI regulation should target the core technology (AI models) or its uses, highlighting the shortcomings of both model-based and use-based approaches. It proposes an alternative: entity-based regulation that targets the large business entities developing powerful AI systems, aiming to better address the unique risks posed by frontier AI.

Texas Implements Groundbreaking AI Governance Law

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026, introducing new regulations for the use of AI by companies and government entities. The law includes provisions for prohibiting certain uses of AI, requiring disclosures in healthcare, and creating an AI regulatory “sandbox.”

Ensuring Safe Adoption of Generative AI: The Role of Output Inspection

As organizations adopt generative AI tools like Zoom AI Companion and Microsoft Copilot, they must establish robust guardrails to manage permissions and usage policies. Guardrails alone are not enough, however: inspecting AI-generated outputs is crucial to verify compliance and close the gap between written policies and what the AI actually produces.

Pillar Security Launches Comprehensive AI Security Framework

Pillar Security has developed an AI security framework called the Secure AI Lifecycle Framework (SAIL), aimed at enhancing the industry’s approach to AI security through strategy and governance. The framework outlines over 70 risks and provides mitigations to support secure AI adoption across various sectors.

Evolving Compliance: Adapting to AI and Digital Systems in Clinical Research

The regulatory landscape for clinical development and pharmaceutical manufacturing is shifting as the European Commission proposes updates to Good Manufacturing Practice (GMP) guidelines, particularly regarding digital systems and artificial intelligence (AI). These changes aim to enhance compliance and oversight, ensuring that digital records and AI tools meet the evolving standards in clinical trials and drug development.

AI-Generated Band Sparks Controversy in Music Streaming

The Velvet Sundown, an AI-generated band, garnered millions of streams on Spotify before being exposed as a synthetic creation, igniting a debate on copyright and content regulation in the music industry. The incident highlights the lack of oversight regarding AI-generated content on streaming platforms and raises concerns about transparency and the impact on human musicians.
