Date: January 16, 2026

AI Tools in Healthcare: Unaddressed Risks to Patient Safety

The Department of Veterans Affairs Office of Inspector General has found that the Veterans Health Administration’s use of generative AI chat tools for clinical care poses potential patient safety risks. The analysis highlights the lack of formal mechanisms to identify and manage AI risks in clinical settings.

Read More »

White House Aims for Unified Federal AI Regulation

The White House is advancing a strategy to centralize AI regulation, aiming to override state mandates it views as hindering innovation. OSTP Director Michael Kratsios emphasized the need for a uniform federal standard but offered few details on collaboration with Congress during his recent testimony.

Read More »

EU and US Forge New AI Principles for Drug Development

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have established new AI principles for drug development to enhance regulatory harmony between the EU and US. This landmark accord aims to improve accountability and ethical integrity throughout the drug development lifecycle.

Read More »

Kentucky Takes Action Against Character.AI for Child Safety Violations

On January 8, 2026, the Kentucky Attorney General filed a lawsuit against Character Technologies, Inc. for allegedly engaging in unfair and deceptive practices, particularly regarding the impact of its AI chatbot, Character.AI, on minors. The complaint highlights issues such as inadequate age verification, inappropriate interactions with children, and insufficient parental oversight tools.

Read More »

Managing AI for Sustainable Impact

As AI adoption grows, companies must integrate AI into materiality assessments and reporting systems to effectively manage its environmental and social impacts. This involves mapping AI demand, establishing policies for responsible use, and measuring impacts to prioritize sustainability.

Read More »

AI Regulation and Privacy: Emerging State-Level Approaches

Panelists at a Federal Communications Bar Association event emphasized privacy challenges linked to artificial intelligence, highlighting the increasing role of state-led governance. States like Colorado and Massachusetts are adopting different strategies to address AI privacy, reflecting varied confidence in existing laws to protect citizens in the evolving AI landscape.

Read More »

Empowering AI Governance for Business Success

As AI becomes ubiquitous, many companies struggle to extract meaningful value from its adoption. Distributed AI governance is essential for integrating AI safely and responsibly, enabling organizations to bridge the gap between innovation and value creation.

Read More »