Business and Policy Leaders Call for Cross-Industry AI Legislation
At a recent gathering, peers, business leaders, and policy experts united in a renewed call for comprehensive cross-industry AI legislation, warning that the government’s current “wait and see” approach is harming businesses, citizens, and the economy.
UK businesses are being urged to adopt AI rapidly to boost productivity and growth, yet the absence of outcomes-focused legislation leaves corporate boards exposed to legal, financial, and reputational risks. The point was driven home at the launch of a new Parliamentary One Pager (POP) from Lord Holmes of Richmond, a document advocating a comprehensive framework for AI governance.
The Call for Action
At the event, held in Westminster, policy experts and senior figures from UK think tanks voiced concern over the government’s inadequate response to AI governance. Lord Holmes argued that the current approach is ill-suited to a technology already influencing decisions across many sectors.
“Uncertainty is slowing AI adoption,” noted Erin Young, Head of Tech Policy at the Institute of Directors. She highlighted the growing expectation that mainstream UK businesses adopt AI swiftly while bearing the consequences if those systems fail. “On one hand, AI is critical for growth. You’ve got to adopt AI as quickly as possible,” she asserted. “But on the other hand, you’re responsible if it all goes wrong.”
Fragmented Regulatory Landscape
Speakers described the current regulatory landscape as fragmented, a state of affairs that is creating anxiety among corporate boards. Young noted that directors are often asked to sign off on AI strategies without a clear understanding of the associated risks and governance measures. In the absence of specific AI legislation, businesses must navigate a “patchwork of existing laws” that inadequately addresses liability, accountability, and best practices.
“Who is liable if an AI system adopted by a company causes harm?” she asked, noting that this ambiguity falls hardest on small and medium-sized enterprises (SMEs), which lack the legal resources to manage such uncertainty.
Trust and Governance
Gaia Marcus, Director of the Ada Lovelace Institute, warned that piecemeal regulation risks a “whack-a-mole approach” to managing AI harms. She stressed that public trust in AI is crucial, citing a poll indicating that 91% of the public want AI technologies to be used fairly.
Hannah Perry, Director at Demos Digital, argued that effective governance could help rebuild the declining trust between citizens and the state. She called for “binding and enforceable, cross-sector, human rights-based AI regulation”, describing it as a necessity in a rapidly evolving technology landscape.
Conclusion
The consensus among business leaders at the event was clear: without legal clarity, responsible AI adoption will stall. “Good governance doesn’t stop innovation. It enables innovation,” Young concluded, underlining the need for a governance framework that gives boards the confidence to invest in AI.
Lord Holmes summed up the discussion by stating that the current status quo is failing on multiple fronts: “It’s not working for citizens, it’s not working for our society, it’s not working for our communities, and it’s not working for business.”