Category: Artificial Intelligence Governance

AI Innovations in Compliance and Governance

At the recent Pharmaceutical Compliance Congress West in San Diego, Valerie Webb and David Morris discussed how AI enhances compliance efficiency and governance. Webb shared her experience using AI to streamline third-party risk management, significantly reducing manual tasks and improving vendor oversight.

California’s AI Transparency Revolution

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, which requires large AI developers to publicly disclose safety frameworks and transparency reports about their models. This legislation aims to enhance accountability and safety in AI development while establishing whistleblower protections for employees in the field.

California’s Groundbreaking AI Safety Disclosure Law

On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act, making California the first state to mandate public safety disclosures from developers of advanced AI models. The law aims to enhance transparency and accountability among large AI developers, requiring them to disclose safety risk management practices and report critical safety incidents.

AI in Governance: Are We Ready for the Transition?

Recent developments in Albania and Japan have brought algorithmic governance into the public eye, with Albania’s digital assistant managing procurement processes and Japan’s Path to Rebirth party declaring an AI as its leader. These cases highlight the shift of algorithmic decision-making from a behind-the-scenes function to an openly acknowledged institutional role, raising questions about legitimacy and accountability in governance.

California Enacts Groundbreaking AI Regulation Law

California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, also known as SB 53, into law, establishing new regulations for top AI companies that mandate transparency and reporting of AI-related safety incidents. This groundbreaking legislation is the first of its kind in the U.S. and aims to balance innovation in the AI industry with necessary safety measures.

US Rejects UN’s Call for Global AI Governance Framework

U.S. officials rejected the establishment of a global AI governance framework at the United Nations General Assembly, despite broad support from many nations, including China. Michael Kratsios of the U.S. Office of Science and Technology Policy emphasized that Washington opposes centralized control over AI, advocating instead for responsible diffusion and national sovereignty.

Agentic AI: Managing the Risks of Autonomous Systems

As companies increasingly adopt agentic AI systems for autonomous decision-making, they face the emerging challenge of agentic AI sprawl, which can lead to security vulnerabilities and operational inefficiencies. Experts urge businesses to implement robust governance frameworks to navigate these risks effectively and avoid the pitfalls seen in past technology deployments.

US Rejects Global AI Governance at UN General Assembly

The United States rejected calls for international oversight of artificial intelligence at the U.N. General Assembly, emphasizing the importance of national sovereignty over centralized governance. This stance put Washington at odds with global leaders advocating collaborative frameworks to address the challenges posed by AI.

Ethical AI Assessment in Latin America: Key Insights and Innovations

UNESCO has developed the Ethical Impact Assessment (EIA) to help institutions evaluate the ethical implications of AI projects, promoting proactive governance and alignment with core ethical values. The EIA was piloted across Latin America, providing valuable insights that will enhance its usability and support responsible AI development.
