Legal Pitfalls of AI‑Powered Hiring

AI can boost hiring efficiency, but it also creates legal risks if used without safeguards; employers must validate, monitor, and maintain human oversight of AI tools. Implementing transparency, bias testing, and clear documentation helps ensure compliance with emerging state and federal regulations.

AI‑Enabled Medical Devices: Roles and Responsibilities

The article explains how the EU MDR and AI Act currently assign overlapping responsibilities to manufacturers (as defined by the MDR) and providers (as defined by the AI Act) for AI‑based medical devices, and outlines how a proposed 2025 EU regulation would simplify this by placing AIMDs solely under the MDR. It also highlights the distinction between MDR “users” (individual clinicians or patients) and AIA “deployers” (organizations with authority over AI system use), and discusses the potential regulatory impact of the upcoming legislative changes.

Ethical AI: Designing Trustworthy Systems

The post discusses the urgent need to embed ethical principles into AI design and evolution, proposing a framework that combines Ethics by Design with Ethics by Evolution to ensure transparency, accountability, and societal benefit. It emphasizes inclusive stakeholder involvement and continuous monitoring throughout the AI lifecycle to mitigate biases and uphold environmental responsibility and sovereignty.

Ensuring AI Transparency in Broadcasting

The webinar will explore how broadcasters must ensure AI transparency and credibility, covering emerging regulations like the EU AI Act that demand clear disclosures and human oversight. It will provide practical guidance on implementing explainability and compliance measures to maintain audience trust in AI-driven media.

AI‑Powered Global Expansion Meets Regulation

AI-driven startups can now enter new markets faster, but emerging regulations like the EU AI Act are becoming the primary bottleneck. Companies that embed compliance early can turn these rules into a competitive advantage.

Future-Proof AI Contracts: Managing Risk and Responsibility

AI contracting presents unresolved IP ownership, data confidentiality, and liability challenges, urging organizations to craft contracts with clear ownership structures, indemnities, and privacy safeguards. Additionally, adopting AI risk frameworks and future-proofing agreements will help mitigate emerging regulatory and operational risks.

Boosting U.S. AI Innovation with the CREATE Act

The CREATE AI Act aims to establish a National Artificial Intelligence Research Resource that will provide researchers, educators, and students with broader access to AI tools, data, and training resources. Lawmakers argue this will democratize AI development, boost economic growth, and maintain U.S. leadership in AI while addressing safety and ethical concerns.

Colorado Senate Pushes AI Accountability Bill for Consumer Protection

Senate Bill 189, introduced by Colorado Senate Majority Leader Robert Rodriguez, aims to regulate AI-driven consequential decisions by requiring transparency, consumer notice, and a limited right to human review while simplifying liability compared to the 2024 law. The bill has garnered mixed support, with business and consumer groups cautiously optimistic about its more manageable compliance requirements.

AI Governance Spurs Trustible’s Expansion in Healthcare and Enterprise

Trustible is expanding its AI governance role in highly regulated sectors like healthcare and legal investigations, helping large providers and payors ensure auditable, evidence-based oversight as AI adoption grows. The company also partners with firms such as Nuix to make AI deployments defensible and compliant with stringent data-governance requirements.

xAI Challenges Colorado AI Law in Landmark Lawsuit

xAI has sued Colorado, claiming the state's AI Act violates the First Amendment, the Dormant Commerce Clause, and the Fourteenth Amendment by imposing burdensome compliance and viewpoint-based restrictions. The lawsuit could set a precedent for AI governance nationwide, influencing how companies document and manage high-risk AI systems.

China’s AI Crackdown Forces Companies to Prioritize Compliance

China has launched a four-month enforcement campaign targeting AI misuse, including weak model security, data poisoning, unregistered deployments, and labeling failures. The crackdown signals that comprehensive compliance will become a core operating requirement for all AI platforms in the Chinese market.

EU AI Act Delay Threatens High-Risk Compliance Timelines

EU legislators failed to agree on amendments to the EU AI Act, pausing talks on the Digital Omnibus that would postpone key compliance deadlines for high‑risk AI systems. Consequently, the current deadlines remain, with obligations for high-risk AI systems taking effect in August 2026, prompting organizations to start building governance programs now.

UK Sets New Standards for AI Deployment

Liz Kendall announced that the UK will launch a new AI Hardware Plan at London Tech Week in June and aims to secure 5% of the global AI chips market, while also committing to publish best-practice guidance on AI model evaluation at the international AI Security Institutes meeting in July. The government’s strategy focuses on supporting British AI companies and collaborating with other middle-power nations to set global standards for safe AI deployment.

Europe’s AI Act Faces Political and Technological Sharks

The EU AI Act, the world’s first binding AI regulation, aims to protect democracy by banning high-risk practices and requiring watermarks on AI-generated content, yet its implementation faces political pressure and delays. MEP Brando Benifei defends the law, arguing that regulating enduring human contexts rather than specific technologies will ensure its lasting impact.

What the EU AI Act Means for High-Risk Systems

As the EU AI Act takes effect, organizations must adhere to new preparatory obligations for high-risk AI systems, ensuring compliance with safety and rights regulations. Experts emphasize the importance of operationalizing these rules to foster responsible AI innovation across Europe.

John Snow Labs Sets a New Standard for AI Governance

John Snow Labs earned the Pacific AI Governance Certification, validating regulatory-grade controls for healthcare AI systems. The certification emphasizes risk management, bias mitigation, robustness, safety, auditability, and compliance readiness for real-world clinical deployment.

AI Governance: Bridging the Gap in Financial Services

Financial services are deploying AI at scale, but governance and oversight are lagging, increasing operational, regulatory, and trust risks. Firms are being pushed toward continuous monitoring, lifecycle controls, and stronger documentation as AI-driven decisions become mainstream.

AI Settlements Highlight Growing Responsibility in Tech Regulation

Google and Character.AI have settled lawsuits filed by families who claim that interactions with AI chatbots contributed to teenagers' self-harm and suicides. These landmark settlements underscore the urgent need for stronger AI regulation and accountability in the U.S. legal system.

Legal Challenges in AI-Driven Pharmacovigilance

As regulators worldwide scrutinize AI use in pharmacovigilance, in-house counsel must tackle complex issues such as accountability when algorithms fail and the auditing of black-box systems. Merck's legal and pharmacovigilance teams share their strategies for managing these emerging risks.

Schellman Becomes the First Accredited Auditor for AIUC-1

Schellman has become the first authorized auditor for AIUC-1, the new security standard for AI agents, addressing the growing challenges of AI compliance. This partnership enhances the suite of AI certification services available to clients, promoting security, safety, and reliability in AI systems.

South Africa’s Bold Steps Toward AI Regulation and Innovation

South Africa's new AI policy establishes dedicated institutions and introduces incentives designed to spur innovation while laying the groundwork for regulation.

AI Responsibility in the Workplace: Legal Challenges Ahead

AI adoption in the workplace is redefining accountability between employers and employees, raising legal challenges that organizations will need to address.

Key Compliance Changes for AI Companion Regulations in Washington and Oregon

Washington and Oregon have introduced new regulations for AI companion services, bringing key compliance changes that providers operating in those states need to know.

EU Classifies ChatGPT as VLOSE: New Challenges for AI Regulation

The EU has designated ChatGPT as a Very Large Online Search Engine (VLOSE) under the Digital Services Act, bringing heightened transparency and risk-management obligations and raising new challenges for AI regulation.

Establishing an Effective AI Governance Framework

Establishing an effective AI governance framework is crucial for organizations to manage risks, ensure compliance, and foster responsible AI adoption.

AI Omnibus: Key Developments Ahead of Trilogues

Trilogue negotiations on the EU's AI Omnibus are getting under way, with key developments emerging for organizations tracking potential changes to AI Act compliance obligations.

AIC4 Compliance: Ensuring Secure AI in Cloud Services

The AIC4 criteria catalogue helps cloud providers demonstrate that their AI services are secure, supporting their path to compliance with the EU AI Act.

California’s AI Procurement Overhaul

California has overhauled its AI procurement standards, introducing new accountability requirements for state agencies and the vendors that supply them.

China’s New Rules on AI Companions: Protecting Minors and Promoting Safe Innovation

China regulates AI companion services for minors, emphasizing safety, innovation, and user protection in emerging AI technologies.

Plans for Safe AI Innovation in Financial Services