Category: AI Regulation

US Lawmakers Push to Ban Adversarial AI Amid National Security Concerns

A bipartisan group of U.S. lawmakers has introduced the “No Adversarial AI Act,” which would bar federal agencies from using artificial intelligence tools developed in adversary nations such as China, Russia, Iran, and North Korea. The legislation reflects growing concern over AI technologies that could compromise national security, particularly amid scrutiny of the Chinese AI firm DeepSeek.

New Safeguard Tiers for Responsible AI in Amazon Bedrock

Amazon Bedrock Guardrails now offers safeguard tiers, allowing organizations to implement customizable safety controls for their generative AI applications. This tiered approach enables companies to select appropriate safeguards based on specific needs, balancing safety and performance across various use cases.
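
As a rough illustration of the tiered approach, the sketch below shows how a guardrail with a selected safeguard tier might be created with boto3. The guardrail name, filter choices, and especially the tier-selection field are assumptions for illustration only and should be checked against the current Amazon Bedrock Guardrails documentation.

```python
import boto3

# Guardrails are managed through the Bedrock control-plane client.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="support-assistant-guardrail",  # hypothetical name
    description="Safety controls for a customer-facing generative AI assistant",
    contentPolicyConfig={
        "filtersConfig": [
            # Filter strengths let teams balance safety and performance per use case.
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ],
        # Assumed field for safeguard-tier selection; verify the exact
        # parameter name and allowed values in the latest boto3/Bedrock docs.
        "tierConfig": {"tierName": "STANDARD"},
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)

print(response["guardrailId"], response["version"])
```

Under this setup, different applications could reference guardrails configured with different tiers, matching specific safety needs without rebuilding the rest of the policy.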

Texas Takes Charge: New AI Governance Law Enacted

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, asserting the state’s right to legislate on consumer protection and AI use. The law aims to promote responsible AI development while safeguarding individuals from associated risks and ensuring transparency in AI interactions.

Tech Giants Push Back: Delaying the EU’s AI Act

Meta and Apple are urging the European Union to delay implementation of its landmark AI Act, arguing that the current timeline could hinder innovation and overwhelm businesses. With the world’s first comprehensive AI regulation set to take effect in August 2025, many companies remain unprepared for compliance, raising fears of stifled growth, particularly among smaller firms.

EU Tech Giants Call for Pause on AI Act to Foster Innovation

A coalition of tech companies, including Alphabet, Meta, and Apple, has urged EU leaders to pause key aspects of the AI Act, warning that it may stifle innovation. They argue that, without a delay, the implementation timeline would put European businesses at a disadvantage against competitors in the U.S. and China.

Bridging the AI Governance Talent Gap

The rapid evolution of generative artificial intelligence has exposed a critical vulnerability: a growing talent gap in AI governance. As organizations adopt these technologies, many legal departments find themselves unprepared to manage the associated regulatory, ethical, and operational risks.

EU’s AI Act: A Call for Caution Amid Innovation Concerns

The EU is being urged to delay the rollout of the AI Act due to missing frameworks and legal uncertainties that could hinder AI innovation. Industry group CCIA Europe warns that a rushed implementation may jeopardize the bloc’s economic ambitions and its competitiveness in the AI sector.

Experts Needed: Join the EU’s AI Scientific Panel

The European Commission is establishing a scientific panel of independent experts to assist in implementing the AI Act, focusing on general-purpose AI models and systems. The panel will advise on systemic risks and model classification, with applications closing on September 14.

AI Act Under Fire: Urgent Call for Caution in Europe

The Computer & Communications Industry Association (CCIA Europe) has cautioned EU leaders about the potential risks of implementing the AI Act without a finalized framework, stressing that critical provisions for general-purpose AI models still lack essential guidance. They warn that a rushed rollout could jeopardize the EU’s AI ambitions and hinder innovation in the sector.
