Category: AI Compliance

AI, Labor Law, and the Future of Work

This article examines the integration of artificial intelligence (AI) in the workplace and the legal challenges it presents, particularly in human resources. It emphasizes the need for regulatory frameworks that protect employees while leveraging AI's potential benefits.

Understanding Compliance with the Colorado Artificial Intelligence Act

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, introduces comprehensive regulations for high-risk AI systems, impacting various industries such as finance, healthcare, and employment. Businesses must assess their AI usage, implement compliance measures, and ensure transparency to avoid significant liabilities and uphold consumer rights.

AI Compliance Challenges for General Counsel

The inconsistent nature of artificial intelligence (AI) regulations worldwide presents significant challenges for organizations and their legal counsel as they strive for compliance amid rapid technological advancements. General counsel must navigate a complex regulatory landscape while ensuring safeguards for high-risk applications to foster innovation and maintain ethical standards.

Building Inclusive AI for a Diverse Future

In an AI-driven era, it is essential that AI solutions be accessible and inclusive for people with disabilities, as more than 380 million working-age adults worldwide live with a disability. However, a significant lack of high-quality disability data in AI development risks perpetuating existing barriers for these individuals.

EIOPA’s Insights on AI Governance in Insurance

On February 12, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) published a consultation on its draft opinion regarding artificial intelligence (AI) governance and risk management. The Opinion provides guidance for insurance undertakings on the responsible use of AI systems in the insurance value chain, emphasizing the importance of proportionality in governance and risk management measures.

New Product Liability Challenges for AI Innovations

The new EU Product Liability Directive 2024/2853, which came into force on December 8, 2024, significantly modernizes product liability rules and explicitly includes software and AI-integrated products. Companies using AI in their products must be aware that they can be held liable for damages caused by software defects, including issues arising from insufficient updates or cybersecurity weaknesses.

Banks Embrace Responsible AI Talent Amid Growing Adoption

As the financial sector advances its AI initiatives, banks have significantly increased their hiring of professionals focused on governance and ethical use, with 41 of 50 large banks now employing dedicated AI governance staff. Research indicates that responsible AI practices are becoming integral to banking operations, with leading institutions conducting in-house studies to inform their adoption strategies.

Compliance Challenges of the EU AI Act: Key Insights for Organizations

In a recent interview, David Dumont discusses the implications of the EU AI Act, highlighting how organizations can utilize existing GDPR frameworks to address new compliance obligations. He emphasizes the need for businesses to monitor national enforcement variations and prepare for potential legal challenges as the AI Act evolves.

Unlocking Compliance: The MSP’s Role in the AI Revolution

The 2020s are the decade of artificial intelligence, with the market projected to reach $826 billion by 2030. This rapid advancement necessitates responsible AI governance, presenting both challenges and opportunities for managed service providers (MSPs) to offer Compliance-as-a-Service (CaaS) solutions.

Driving Ethical AI Compliance in Europe

The EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe by developing tools and frameworks that promote transparency and sustainability in AI technologies. Launched in January 2025, the project seeks to address regulatory and ethical challenges while fostering public trust in AI systems.
