February 18, 2026

AI Tools and Attorney-Client Privilege: A Critical Court Ruling

On February 10, 2026, the U.S. District Court for the Southern District of New York ruled that documents generated using an AI tool are not protected by attorney-client privilege, even when privileged information is included. The ruling prompts companies to reassess their AI usage policies to avoid inadvertently waiving privilege.


Revolutionizing Music: The First Ethical AI Platform for Artists

SoundBreak is an AI-driven music platform partnering with recording artists and songwriters to create an ethical, artist-first environment. It enables fans and musicians to co-write songs using AI models based on real artists’ styles, ensuring artists are credited, protected, and compensated for their work.


California Takes a Stand on AI Image Accountability

California Attorney General Rob Bonta is launching an AI accountability initiative targeting non-consensual image generation by Elon Musk's xAI. He emphasizes the importance of state-level regulation to address AI and social media harms, comparing these challenges to the opioid crisis.


Advancing AI Governance: Insights from HUDERIA Platform II

The HUDERIA Platform II, held on 12–13 February 2026 in Strasbourg, facilitated a collaborative exchange on AI risk and governance, focusing on the practical implementation of the HUDERIA Methodology. Discussions included updates on international AI governance developments, the Context-Based Risk Analysis (COBRA) Resources, and the operationalization of these resources for future HUDERIA Model components.


Seton Hall’s Ethical AI Initiative Empowers Students and Faculty

Seton Hall University has established an Artificial Intelligence Advisory Council (AIAC) to guide the ethical integration of AI technologies in alignment with its Catholic mission. The council aims to enhance student learning through ethical AI literacy training and provides resources and recommendations for classroom AI usage policies.


Bridging the Governance Gap in Autonomous Agent Systems

The rapid advancement of Agent2Agent (A2A) and the Agent Communication Protocol (ACP) has created a governance gap, challenging organizations to maintain accountability as autonomous agents operate with increasing independence. Implementing an ‘Agent Treaty’ layer is crucial to make policies machine-enforceable and ensure oversight of agent actions.
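The "treaty" idea can be sketched as a policy check that runs before any agent action is executed. This is a minimal illustration, not part of the A2A or ACP specifications; the names `PolicyRule`, `AgentTreaty`, and the sample actions are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRule:
    """One machine-enforceable rule in the hypothetical treaty."""
    action: str           # e.g. "send_email", "make_payment"
    max_per_window: int   # how many times an agent may perform it
    requires_human: bool  # escalate to a person instead of executing

class AgentTreaty:
    """Deny-by-default gate that every agent action must pass through."""

    def __init__(self, rules):
        self._rules = {r.action: r for r in rules}
        self._counts = {}  # (agent_id, action) -> uses so far

    def authorize(self, agent_id, action):
        """Return (allowed, reason). Actions not covered are denied."""
        rule = self._rules.get(action)
        if rule is None:
            return False, "action not covered by treaty"
        key = (agent_id, action)
        used = self._counts.get(key, 0)
        if used >= rule.max_per_window:
            return False, "rate limit exceeded"
        if rule.requires_human:
            return False, "human approval required"
        self._counts[key] = used + 1
        return True, "ok"

treaty = AgentTreaty([
    PolicyRule("send_email", max_per_window=2, requires_human=False),
    PolicyRule("make_payment", max_per_window=1, requires_human=True),
])
```

The design choice worth noting is deny-by-default: an autonomous agent that invents a new action type gets blocked rather than silently allowed, which is the accountability gap the article describes.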


Dutch Watchdog Warns of AI’s Unchecked Future

A Dutch data protection watchdog cautions that without strong governance, generative AI could turn into a “wild west” scenario, posing serious societal risks. The authority stresses the importance of regulation that upholds fundamental rights and democratic values to ensure responsible AI development and use.


Tesla’s Grok AI Expands to Europe Amid Regulatory Challenges

Tesla is expanding the Grok AI chatbot to its electric vehicles in the UK and Europe, offering voice-activated assistance amid growing regulatory scrutiny of AI safety and data privacy. The rollout aims to enhance the in-car experience but faces challenges in complying with European data protection standards.
