August 13, 2025

Texas Implements Groundbreaking AI Governance Law

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which takes effect on January 1, 2026 and introduces new rules for the use of AI by companies and government entities. The law prohibits certain uses of AI, requires disclosures when AI is used in healthcare, and creates an AI regulatory “sandbox.”

Ensuring Safe Adoption of Generative AI: The Role of Output Inspection

As organizations adopt generative AI tools like Zoom AI Companion and Microsoft Copilot, they must establish robust guardrails that govern permissions and usage policies. Policies alone are not enough, however: inspecting the content the AI actually generates is essential to verify compliance and to close the gap between what the policies allow and what the tools produce.
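Output inspection of this kind can be as simple as screening generated text against policy rules before it is shared. The sketch below is a minimal, hypothetical illustration in Python, assuming simple regex-based rules; the POLICY_PATTERNS table and the inspect_output function are inventions for this example, not part of Zoom AI Companion, Microsoft Copilot, or any vendor’s API.

```python
import re

# Hypothetical policy rules: regex patterns for content that should not leave
# the organization in AI-generated output (illustrative only, not exhaustive).
POLICY_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def inspect_output(generated_text: str) -> tuple[bool, list[str]]:
    """Check AI-generated text against policy patterns before it is shared.

    Returns (allowed, violations); allowed is False if any pattern matched.
    """
    violations = [name for name, pattern in POLICY_PATTERNS.items()
                  if pattern.search(generated_text)]
    return (not violations, violations)

# Example: block a generated meeting summary that leaked a Social Security number.
draft = "Action items: follow up with J. Doe (SSN 123-45-6789) about onboarding."
allowed, violations = inspect_output(draft)
if not allowed:
    print(f"Output blocked; policy violations: {violations}")
```

In practice, a check like this would sit between the AI tool and wherever its output lands (chat, email, documents), so that enforcement applies to what the model actually produced rather than only to who was permitted to invoke it.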

Pillar Security Launches Comprehensive AI Security Framework

Pillar Security has developed the Secure AI Lifecycle Framework (SAIL), which aims to strengthen how the industry approaches AI security through strategy and governance. The framework outlines more than 70 risks and pairs them with mitigations to support secure AI adoption across sectors.

Evolving Compliance: Adapting to AI and Digital Systems in Clinical Research

The regulatory landscape for clinical development and pharmaceutical manufacturing is shifting as the European Commission proposes updates to its Good Manufacturing Practice (GMP) guidelines, particularly on digital systems and artificial intelligence (AI). The changes aim to strengthen compliance and oversight, ensuring that digital records and AI tools meet evolving standards in clinical trials and drug development.
