AI Security and Safety: Building Trustworthy Systems Amid New Risks

As enterprises increasingly integrate AI capabilities into their operations, they must address the evolving cybersecurity risks that accompany AI adoption. A comprehensive approach that unifies AI security with traditional enterprise cybersecurity practices is essential for mitigating these threats while ensuring the trustworthiness of AI systems.

Reforming the AI Act: A Critical Opportunity for Innovation Before 2026

Concerns have been raised that certain provisions of the EU’s Artificial Intelligence Act could hinder innovation and deter investment, particularly for start-ups and SMEs. The AI Act’s high-risk requirements will not apply until August 2026, presenting an opportunity for the EU to reflect and potentially reform this framework before it takes effect.

Italy Enacts Comprehensive AI Legislation

On September 23, 2025, the Italian AI Law was signed into law, introducing provisions that complement the EU AI Act and designating national authorities for oversight. The law addresses key areas such as data use in healthcare, employer obligations regarding AI in the workplace, and copyright protections for AI-generated works.

Korea’s AI Law: Balancing Innovation and Regulation

South Korea has introduced the world’s first law combining AI strategy, industrial promotion, and regulation, known as the Framework Act on the Promotion and Trust of Artificial Intelligence. However, this integrated approach poses risks, as the regulations may hinder industrial growth by imposing burdensome requirements that do not necessarily enhance safety or innovation.

Shadow AI: Balancing Productivity and Security Risks in Corporations

In corporate America, employees are increasingly using unauthorized AI tools to boost productivity, often without their employers’ awareness, creating significant security risks. This shadow adoption of AI raises concerns about data breaches and compliance violations, as sensitive information is shared with unvetted systems.

Italy Leads Europe with Groundbreaking National AI Regulations

On September 17, 2025, the Italian Parliament approved the AI Law, the first national legislation in the EU governing artificial intelligence, which will take effect on October 10. This law establishes core principles for AI use, focusing on human oversight, personal data protection, and specific provisions for critical sectors such as healthcare and labor.

AI Oversight Failures Exposed in Deloitte’s $440k Report Blunder

Deloitte recently faced backlash after admitting to using generative AI in a report for the Australian federal government, which included fictitious citations and distorted quotes. This incident highlights a growing issue of “AI slop,” where flawed machine-generated content is presented as credible, underscoring the need for improved oversight in AI usage.

AI Compliance Trends: From California to New York City

As artificial intelligence becomes more integrated into organizations, lawmakers are developing compliance frameworks to regulate its use by employers. Key legislation in states such as California and Colorado, as well as in New York City, is establishing requirements to ensure that AI-driven employment decisions are transparent and non-discriminatory.

IndiaAI Unveils Five Innovative Projects for AI Safety and Trust

IndiaAI has selected five innovative projects aimed at enhancing the “Safe and Trusted AI” framework, focusing on deepfake detection, bias mitigation, and generative AI security. These initiatives reflect India’s commitment to developing reliable and responsible AI technologies while ensuring ethical practices in the deployment of AI systems.

California’s Landmark Law on Frontier AI Regulation

California has enacted the nation’s first law regulating frontier artificial intelligence, the Transparency in Frontier Artificial Intelligence Act (TFAIA), which requires large AI developers to publish safety frameworks and report critical safety incidents. The law aims to balance innovation with necessary safeguards, ensuring public safety and accountability in AI development.
