Governments Are Using AI To Draft Legislation. What Could Possibly Go Wrong?
As governments increasingly turn to artificial intelligence (AI) for legislative processes, both the potential benefits and the risks are coming into focus. A notable instance occurred when British officials faced the daunting task of reviewing tens of thousands of public submissions to an independent overhaul of the water sector. To speed up the work, they used an in-house AI tool named Consult, part of the government's "Humphrey" suite, which sorted more than 50,000 responses into themes in roughly two hours at a cost of about £240. By one estimate, such tools could save around 75,000 days of manual analysis each year.
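For a sense of what "sorting responses into themes" involves mechanically, the sketch below shows one generic approach: represent each free-text response as a TF-IDF vector and group the vectors with k-means. This is purely illustrative, assumes scikit-learn, and is not a description of how the Consult tool itself works; its internals are not detailed here.

```python
# Illustrative sketch only: grouping free-text consultation responses into
# candidate themes with TF-IDF features and k-means (scikit-learn).
# This is NOT the government's Consult tool, just a generic example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Water bills are far too high for the service customers receive.",
    "Sewage spills into rivers must be stopped.",
    "Customers face rising bills while leaks go unrepaired.",
    "River pollution from sewage overflows is unacceptable.",
]

# Convert each response into a sparse TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)

# Partition the responses into a small number of candidate themes.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# A human reviewer would still need to inspect and label each cluster.
for label, text in zip(kmeans.labels_, responses):
    print(label, text)
```

In a real system the cluster count, vectorisation method, and labelling step would all need human judgment; the point of the sketch is only that the grouping itself is routine machine-learning work.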
A spokesperson for the UK government said that "AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people." Still, using AI in this way raises significant questions about responsible deployment, which the government says it addresses through guidelines and audits.
Challenges of AI in Legislative Processes
Chris Schmitz, a researcher at the Hertie School in Berlin, argues that the harder challenge is not analysing consultation material with AI but preventing the process from being manipulated. He points to an early warning sign: an AI-powered tool already available in the UK lets people generate objections to planning applications. If public participation becomes that easy to game, he warns, governments risk losing the public consent that consultations are meant to secure.
Global Adoption of AI in Legislation
The UK is not alone in its AI initiatives. The Italian Senate has adopted AI to manage legislative overload, clustering similar proposals and flagging potential filibustering tactics. The European Commission, meanwhile, recently announced plans for multilingual chatbots to help users navigate their legal obligations under the EU's AI Act and Digital Services Act.
In Brazil, the Chamber of Deputies is expanding its Ulysses program, which classifies legislative material and gives staff tools for working with external AI platforms such as Claude, Gemini, and GPT, with officials promising strict security and transparency controls.
New Zealand’s Parliamentary Counsel Office has tested an AI proof of concept that generates initial drafts of explanatory notes related to legislation, while Estonia is considering using AI to check bills for errors following costly mistakes in draft legislation.
Legitimacy and Trust Issues
Estonia's Prime Minister, Kristen Michal, emphasized the value of using AI to spot loopholes, but also cautioned that the same technology could be turned against the process: foreign states could flood government systems with AI-generated submissions to skew the outcomes of public consultations. The result would be a kind of legislative DDoS attack, in which genuine engagement is drowned out by mass-generated input.
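To make the "legislative DDoS" concern concrete, the sketch below shows one naive safeguard a consultation platform could apply: comparing submissions pairwise and flagging near-duplicates that may indicate mass-generated input. The similarity measure (Python's standard-library difflib) and the threshold are illustrative assumptions, not a description of any government's actual defences.

```python
# Illustrative sketch only: flag near-duplicate consultation submissions that
# may indicate coordinated, mass-generated input. The character-level ratio
# from difflib and the 0.85 threshold are arbitrary choices for this example.
from difflib import SequenceMatcher
from itertools import combinations

submissions = [
    "I strongly oppose clause 4 because it weakens local oversight.",
    "I strongly oppose clause 4 as it weakens local oversight.",
    "Clause 7 should include an independent review mechanism.",
]

THRESHOLD = 0.85  # similarity above this is treated as a near-duplicate

# Compare every pair of submissions and report suspiciously similar ones.
for (i, a), (j, b) in combinations(enumerate(submissions), 2):
    similarity = SequenceMatcher(None, a, b).ratio()
    if similarity >= THRESHOLD:
        print(f"Submissions {i} and {j} look near-identical (ratio {similarity:.2f})")
```

Real campaigns would paraphrase rather than copy, so serious defences would need more than string similarity, but the sketch shows why distinguishing genuine engagement from generated volume is a tractable, if imperfect, engineering problem.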
Trust in government and AI is already fragile; a recent survey indicated that only 29% of Britons trust their government to use AI accurately. The challenge for governments is to maintain transparency in AI-assisted decision-making processes to preserve legitimacy.
Human Oversight and Accountability
Experts like Ruth Fox from the Hansard Society stress the necessity of human oversight in AI outputs. “You still need human eyes and a human brain to check that the themes and sentiment it produces are actually accurate,” she warns. Furthermore, Joanna Bryson, an AI ethicist, points out the vulnerabilities of AI systems, such as outages and unforeseen model changes, underscoring the need for accountability in the democratic process.
The U.S. Approach to AI in Legislation
In the United States, the federal government is openly integrating AI, with plans to use Google's Gemini for drafting regulatory texts. This raises concerns that resulting rules could face legal challenges if required procedures are not strictly followed. Philip Wallach of the American Enterprise Institute cautions against treating AI as a shortcut, arguing that embedding AI-generated drafting into the process can obscure errors with serious consequences for how government functions.
The Future of AI in Governance
The integration of AI into legislative processes is not inherently harmful, but it demands careful design if it is not to deepen public distrust. Done thoughtfully, AI could modernize democratic processes and help rebuild trust in governance; mishandled, it could aggravate existing problems and push the public further toward disengagement.