Urgent Call for Comprehensive AI Legislation


In a recent gathering, peers, business leaders, and policy experts united in a renewed call for comprehensive cross-industry AI legislation, warning that the government's current "wait and see" approach is harming businesses, citizens, and the economy.

UK businesses are being encouraged to rapidly adopt AI to boost productivity and growth. However, the lack of outcomes-focused legislation leaves corporate boards exposed to legal, financial, and reputational risks. The warning came at the launch of a new Parliamentary One Pager (POP) by Lord Holmes of Richmond, a document that advocates a comprehensive framework for AI governance.

The Call for Action

At the event held in Westminster, various stakeholders, including policy experts and senior figures from UK think tanks, expressed their concerns over the government’s inadequate response to AI governance. Lord Holmes stated that the government’s current approach is ill-suited for a technology that is already influencing decisions across various sectors.

“Uncertainty is slowing AI adoption,” noted Erin Young, Head of Tech Policy at the Institute of Directors. She highlighted the growing expectation for mainstream UK businesses to adopt AI swiftly while also bearing the consequences if these systems fail. “On one hand, AI is critical for growth. You’ve got to adopt AI as quickly as possible,” she asserted. “But on the other hand, you’re responsible if it all goes wrong.”

Fragmented Regulatory Landscape

The current regulatory landscape is described as fragmented, creating anxiety among corporate boards. Young pointed out that directors are often required to approve AI strategies without a clear understanding of the associated risks and governance measures. In the absence of specific AI legislation, businesses are left to navigate a “patchwork of existing laws,” which inadequately address liability, accountability, and best practices.

“Who is liable if an AI system adopted by a company causes harm?” she questioned, emphasizing that this ambiguity disproportionately affects small and medium-sized enterprises (SMEs) that lack the legal resources to manage such uncertainties.

Trust and Governance

Gaia Marcus, Director of the Ada Lovelace Institute, warned that piecemeal regulation could lead to a "whack-a-mole approach" to managing AI risks. She reiterated that public trust in AI is crucial, citing a poll indicating that 91% of the public want AI technologies to be used fairly.

Hannah Perry, Director at Demos Digital, argued that effective governance could help restore trust between citizens and the state, which is currently in decline. She called for "binding and enforceable, cross-sector, human rights-based AI regulation," emphasizing its necessity in the evolving technology landscape.

Conclusion

The consensus among business leaders at the event was clear: without legal clarity, responsible AI adoption will stagnate. “Good governance doesn’t stop innovation. It enables innovation,” Young concluded, underlining the need for a governance framework that instills confidence in corporate investment in AI technologies.

Lord Holmes summed up the discussion by stating that the current status quo is failing on multiple fronts: “It’s not working for citizens, it’s not working for our society, it’s not working for our communities, and it’s not working for business.”
