AI Compliance Challenges for General Counsel

The inconsistent nature of artificial intelligence (AI) regulations worldwide presents significant and immediate challenges for organizations and their legal counsel. As businesses harness AI’s transformational power, general counsel (GC) everywhere must navigate a labyrinth of new regulations that are not just divergent but, in some cases, diametrically opposed. They must ensure regulatory compliance while contending with competing stakeholder interests, international tensions, and risks that change with the latest innovations in AI. Successful organizations will be those that identify AI’s true business value while implementing appropriate safeguards for high-risk applications.

Regulatory Landscape

The European Union (EU) has emerged as a regulatory pacesetter, advancing stringent, rights-based rules like the AI Act, a comprehensive, risk-based regulatory framework with strict compliance requirements and hefty penalties. By contrast, the United States relies on a fragmented, sector-by-sector approach, with agency guidelines and state actions addressing privacy and bias rather than a unified AI law. This divergence creates a patchwork of compliance burdens, particularly for UK-based GCs caught between the two regimes, in high-stakes, highly regulated industries like healthcare, finance, and insurance.

Adding to the challenges GCs face is the rapid pace of AI innovation, which has intensified the debate over balancing innovation with ethical oversight. Furthermore, the recent rollback of AI safety protocols by the new U.S. presidential administration has underscored the difficulty of spurring innovation while upholding ethical standards like fairness and accountability, which, along with healthy competition and supportive copyright laws, are necessary to maintain public confidence.

Navigating New Policies

Amid these changes, GCs worldwide must navigate the new policy front with agility and urgency, supporting their company's AI ambitions while ensuring that safeguards exist for high-risk uses. Failing to comply with these laws can result in severe financial penalties, and regulators are increasingly scrutinizing AI uses that have the potential to affect human rights, safety, and fairness. Given this complex, continuously shifting regulatory landscape, GCs are best served by a risk- and use-case-focused approach to compliance.

Rapid Corporate Implementation of AI Poses Risks

UK-based GCs find themselves in a particularly tough spot, caught between the EU's comprehensive regulatory standards and the United States' more fragmented approach. At the same time, they must contend with the rapid implementation of AI across organizations. As AI plays an increasingly major role in corporate functions such as human resources and asset management, GCs must adjust their compliance strategies to regulatory changes.

One example of AI being rapidly woven into corporate functions is the annual performance review. New AI tools can summarize the feedback that employees receive from peers and managers and provide insights into their strengths, areas for improvement, and goal-setting opportunities. However, when these AI-assisted reviews influence legally sensitive decisions like pay, promotion, or termination, organizations must establish robust safeguards. Human oversight is crucial to ensure that the AI models used by HR produce consistently accurate results and that any biases are mitigated.

The use of AI in HR introduces unique risks: its "black box" reasoning is often opaque, and the resulting decisions may be more pervasive, widespread, or harder to account for than individual human biases. This raises liability issues akin to those surrounding self-driving cars: with autonomous vehicles, the question is who is liable for accidents; in HR, it's who bears responsibility when systemic discrimination occurs. To foster trust and reduce such risks, fairness, transparency, and accountability must be embedded into all AI-driven processes.

In many organizations, the use of AI also extends far beyond HR and other internal processes. For instance, AI is widely used to improve customer help tools like chatbots. These modern chatbots can respond to unexpected questions and customize their answers to fit each customer, with the goal of creating a more satisfying customer experience. However, while the legal risks associated with AI agents in customer support are generally modest, landmines still exist. Poorly trained AI agents may provide inaccurate information, leading to customer frustration and potentially damaging customer relationships and the organization’s reputation.

Global Regulatory Challenges Ahead

While implementing AI presents reputational and ethical concerns, these are intensified by the broader challenges that GCs face in navigating new AI regulations. Given the complexities of this transformative technology, many organizations are not fully prepared for the regulatory challenges ahead. Whether counseling multinational corporations or UK-based organizations straddling divergent EU and U.S. regulations, GCs must understand the magnitude of this challenge.

In the United States, the decentralized regulatory approach has led to a mix of divergent state laws — a situation exacerbated by the new presidential administration’s deregulatory approach. This makes compliance a moving target. And while the current EU rules are clearly defined, they also require rigorous adherence, leaving little room for error. Complicating matters further is the reality that AI is evolving at a breakneck pace.

To address such challenges, organizations should use analytics and other advanced, data-driven tools to identify compliance gaps and adapt to policy changes. In addition, fostering internal collaboration across legal, compliance, and IT teams is crucial to seamlessly implement AI initiatives and stay in compliance as regulation evolves.

At the same time, regulations, risks, and opportunities will keep changing; therefore, GCs need to stay flexible and proactive. Global GCs, in particular, must align their compliance efforts across regions with very different regulatory philosophies, even as they prepare for future shifts. Taking part in industry discussions with policymakers can also help shape future AI governance frameworks.

A Use Case-Focused Approach to Compliance

For many GCs, the most effective way forward will be to tackle the current tangle of regulations according to specific use cases. GCs should prioritize the AI use cases that pose the greatest risk to individuals, organizations, or society at large, like hiring, employee performance evaluations, and high-stakes decision-making that impacts consumers in regulated industries such as healthcare, insurance, and financial services. By identifying and addressing high-risk AI applications, organizations can ensure they comply with applicable regulations and ethical standards while fostering innovation and mitigating legal risks.
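One lightweight way to operationalize this prioritization is to maintain an inventory of AI use cases and tag each with a coarse risk tier, loosely modeled on the EU AI Act's risk-based categories. The sketch below is illustrative only; the domain list, tier labels, and example inventory are simplified assumptions, not legal classifications:

```python
# Illustrative sketch: triaging an AI use-case inventory by risk tier,
# loosely modeled on the EU AI Act's risk-based approach. The domains,
# tier labels, and sample inventory below are hypothetical assumptions.

HIGH_RISK_DOMAINS = {"hiring", "credit", "healthcare", "insurance"}

def risk_tier(use_case):
    """Assign a coarse risk tier based on domain and impact on individuals."""
    if use_case["domain"] in HIGH_RISK_DOMAINS and use_case["affects_individuals"]:
        return "high"      # prioritize for audits and human oversight
    if use_case["affects_individuals"]:
        return "limited"   # transparency obligations likely apply
    return "minimal"       # internal tooling, lower regulatory exposure

inventory = [
    {"name": "resume screener", "domain": "hiring", "affects_individuals": True},
    {"name": "support chatbot", "domain": "customer_service", "affects_individuals": True},
    {"name": "log summarizer", "domain": "it_ops", "affects_individuals": False},
]

# Sort so the highest-risk use cases surface first for legal review.
TIER_ORDER = ["high", "limited", "minimal"]
triaged = sorted(inventory, key=lambda u: TIER_ORDER.index(risk_tier(u)))
```

Even a simple triage like this gives legal, compliance, and IT teams a shared view of where audit and oversight effort should land first.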

The following three use cases offer a useful illustration of this approach:

  • AI-Powered Hiring Tools: AI systems used in recruitment and hiring processes must be carefully monitored to ensure they do not perpetuate biases or violate anti-discrimination laws. Regular audits of hiring algorithms, combined with human oversight, are essential to maintaining fairness and compliance.
  • AI in Financial Decision-Making: In financial services, AI tools used for credit scoring, loan approvals, or investment decisions must meet rigorous standards of accuracy and fairness. GCs can play a pivotal role in implementing robust validation processes to ensure these tools meet both regulatory and ethical benchmarks.
  • AI in Healthcare Diagnostics: AI applications in healthcare, such as tools that diagnose illnesses or recommend treatments, carry life-and-death implications. Organizations must prioritize transparency, ensuring that these tools are thoroughly tested, explainable, and compliant with relevant healthcare regulations.
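For the hiring use case, one widely used audit heuristic is the "four-fifths rule" from U.S. EEOC selection guidance: a group's selection rate should not fall below 80% of the highest group's rate. The sketch below is a hedged illustration of such a check; the group names and audit figures are hypothetical:

```python
# Illustrative sketch of a disparate-impact audit for an AI hiring tool,
# using the four-fifths (80%) rule heuristic from U.S. EEOC selection
# guidance. Group names and selection counts are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def four_fifths_check(rates_by_group, threshold=0.8):
    """Return, per group, whether its rate is at least 80% of the best rate."""
    benchmark = max(rates_by_group.values())
    return {
        group: (rate / benchmark) >= threshold
        for group, rate in rates_by_group.items()
    }

# Hypothetical audit data: (selected, total applicants) per group.
audit = {"group_a": (45, 100), "group_b": (28, 100)}
rates = {g: selection_rate(s, t) for g, (s, t) in audit.items()}
result = four_fifths_check(rates)
# group_b's rate (0.28) is below 80% of group_a's (0.45 * 0.8 = 0.36),
# so the audit flags it for human review.
```

A failed check is a trigger for human investigation, not an automated verdict; the legally relevant analysis still requires counsel and statistical review.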

As GCs address these use cases, they must also remain vigilant about "AI-washing," whereby organizations overstate the capabilities or ethical soundness of their AI systems. Avoiding it requires transparency, accurate communication, and a commitment to validating the actual performance and limitations of AI tools.

Conclusion

The rapid implementation of AI in corporate processes and the fragmented nature of global regulation present significant challenges for organizations and their GCs, while simultaneously creating opportunities for thoughtful, proactive governance and innovation. This challenge is compounded by the rapid evolution of AI models and use cases, which often outpace lawmakers’ ability to draft rules ensuring their safe and fair use.

By adopting a use case-focused approach, GCs can bring clarity to this complex and quickly changing regulatory landscape. Prioritizing high-risk applications and implementing strategies that balance innovation with compliance becomes even more critical at a time when geopolitical variables — from the new administration’s deregulatory agenda to state-driven ambitions — could further alter the legal terrain. GCs who successfully navigate these dynamics will enable the organizations they serve to thrive as AI and the regulations that govern this transformational technology evolve.
