Date: January 31, 2026

Emerging Threats of Low-Compute AI Models

Researchers are increasingly alarmed by the safety challenges posed by low-compute AI models, which now enable malicious actors to conduct sophisticated digital harm campaigns on consumer-grade hardware. Their findings reveal a troubling trend in which smaller models rapidly approach the performance of larger systems, exposing critical gaps in current AI governance strategies.

MongoDB Enhances AI Vendor Governance with Omnea Collaboration

Omnea is enhancing its procurement platform in the US through a partnership with MongoDB to improve governance and third-party risk controls amid rising AI adoption. This collaboration highlights the growing importance of structured procurement and compliance tools for AI-related services in the technology sector.

Transforming AI from Hype to Boardroom Governance

As AI becomes central to boards seeking shareholder value, a gap is emerging between ambitions and operational realities. Effective governance requires reframing AI as a comprehensive investment portfolio that ties initiatives to financial outcomes while addressing risks and strategic impacts.

Harnessing Shadow AI: Bridging the Leadership Gap

The rise of shadow AI presents significant risks to organizational security, with nearly 90% of employees using AI tools without informing IT departments. Leaders must shift from restrictive policies to a governance framework that enables safe, transparent AI adoption and turns risks into competitive advantages.

AI Compliance: Bridging the Gap Between Law and Engineering

Fujitsu’s Dippu Singh highlights the limitations of checklist governance in AI compliance, emphasizing that the EU AI Act demands observable engineering realities rather than paperwork. He advocates for a shift to “Ethics by Design” to address dynamic AI risks and ensure effective compliance.

Connecticut’s AI Legislation: What to Expect in 2026

Connecticut lawmakers face challenges in creating a unified AI regulatory framework amid disagreements between pro-regulation senators and the Lamont administration. With the 2026 legislative session approaching, efforts to define AI policy and strengthen consumer protections are intensifying.

China’s AI Ecosystem: Driven by Private Innovation and Local Competition

China’s AI landscape is a decentralized ecosystem driven primarily by private companies rather than a centrally planned initiative, with local governments fostering innovation through fiscal competition. An analysis of more than 6,000 generative AI models shows a diverse range of applications, highlighting the priority placed on commercial viability over a singular push toward artificial general intelligence.

Empowering Innovation: The Right to Compute in New Hampshire

Jake Morabito, Senior Director of Policy at ALEC, submitted testimony to the New Hampshire House Commerce and Consumer Affairs Committee regarding HB 124. He highlights the impact of artificial intelligence on the state and advocates for a “Right to Compute” to protect individual liberties, promote responsible AI use, and ensure government restrictions are necessary and justified.
