AI in Legislation: Risks and Rewards

Governments Are Using AI To Draft Legislation. What Could Possibly Go Wrong?

As governments increasingly turn to artificial intelligence (AI) in their legislative processes, both the potential benefits and the risks are coming into sharper focus. A notable instance occurred when British officials faced the daunting task of reviewing tens of thousands of submissions to an independent review aimed at overhauling the water sector. To speed up the work, they used an in-house AI tool called Consult, part of the “Humphrey” suite, which sorted more than 50,000 responses into themes in about two hours at a cost of £240. Rolled out across government consultations, such tools could save an estimated 75,000 days of manual analysis each year.
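
Grouping tens of thousands of free-text responses by theme is, at its core, an embed-and-cluster problem. The sketch below is not the Consult tool, whose internals are not public; it is a minimal illustration of the general technique, assuming the sentence-transformers and scikit-learn libraries and a small hypothetical list of submissions.

```python
# Minimal sketch of theming consultation responses: embed, then cluster.
# Illustrative only; not the UK government's Consult tool.
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [  # hypothetical submissions; a real consultation has tens of thousands
    "Water bills are already unaffordable for many pensioners.",
    "Sewage discharges into rivers must carry far bigger fines.",
    "Any bill increases should be capped below inflation.",
    "Regulators need stronger powers to punish polluters.",
]

# Encode each response as a dense vector that captures its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group semantically similar responses; the number of themes is a tuning choice.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

themes = defaultdict(list)
for text, label in zip(responses, labels):
    themes[int(label)].append(text)

for label, members in sorted(themes.items()):
    print(f"Theme {label}: {len(members)} responses")
    for text in members:
        print("  -", text)
```

Even in a toy version, the clusters still need a human to name them and spot-check their members, which is precisely the oversight the critics quoted later in this piece insist on.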

A spokesperson for the UK government remarked that “AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people.” However, the use of AI raises significant concerns regarding its responsible application, which the government claims to address through guidelines and audits.

Challenges of AI in Legislative Processes

Chris Schmitz, a researcher at the Hertie School in Berlin, argues that the real challenge of using AI is not analyzing consultation material but preventing the process from being manipulated. If public participation becomes too easy to game, he warns, governments risk losing public consent. He points, for example, to an AI-powered tool already available in the UK for generating objections to planning applications.

Global Adoption of AI in Legislation

The UK is not alone in its AI initiatives. The Italian Senate has adopted AI to manage legislative overload, clustering similar proposals and flagging potential filibustering tactics. The European Commission, meanwhile, has announced plans for multilingual chatbots to help users navigate their legal obligations under the EU AI Act and the Digital Services Act.

In Brazil, the Chamber of Deputies is expanding its Ulysses program, which classifies legislative material and now includes tools that let staff use external AI platforms such as Claude, Gemini, and GPT, with promises of strong security and transparency.

New Zealand’s Parliamentary Counsel Office has tested an AI proof of concept that generates initial drafts of explanatory notes related to legislation, while Estonia is considering using AI to check bills for errors following costly mistakes in draft legislation.

Legitimacy and Trust Issues

Prime Minister Kristen Michal of Estonia emphasized the value of using AI to spot loopholes, but also cautioned that foreign states could flood government systems with submissions to skew public consultation outcomes. The result would be a kind of legislative DDoS attack, in which genuine engagement is drowned out by mass-generated input.
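
One crude defence against that kind of flooding, sketched here purely as an illustration rather than as anything Estonia or the UK has said it deploys, is to flag batches of near-identical submissions before they are counted. The example below uses only Python's standard library and a hypothetical list of submissions.

```python
# Illustrative screen for near-duplicate submissions, one rough signal
# of coordinated or mass-generated input. A sketch, not a real system.
from difflib import SequenceMatcher

submissions = [  # hypothetical consultation input
    "I strongly oppose the proposed charges on rural water users.",
    "I strongly oppose the proposed charges on rural water users!",
    "As a farmer, this plan ignores the realities of seasonal demand.",
    "I strongly oppose the proposed charge on rural water users.",
]

def similarity(a: str, b: str) -> float:
    """Share of matching characters between two normalized texts (0 to 1)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

THRESHOLD = 0.9  # tuning choice: how alike two texts must be to be flagged

flagged = set()
for i in range(len(submissions)):
    for j in range(i + 1, len(submissions)):
        if similarity(submissions[i], submissions[j]) >= THRESHOLD:
            flagged.update({i, j})

print(f"{len(flagged)} of {len(submissions)} submissions look near-identical:")
for idx in sorted(flagged):
    print("  -", submissions[idx])
```

Pairwise comparison scales quadratically, so anything real would hash or embed submissions first, and a cluster of identical wording is only a signal: it can just as easily come from a legitimate template campaign, which is why a human still has to judge what counts as genuine engagement.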

Trust in government and AI is already fragile; a recent survey indicated that only 29% of Britons trust their government to use AI accurately. The challenge for governments is to maintain transparency in AI-assisted decision-making processes to preserve legitimacy.

Human Oversight and Accountability

Experts like Ruth Fox from the Hansard Society stress the necessity of human oversight in AI outputs. “You still need human eyes and a human brain to check that the themes and sentiment it produces are actually accurate,” she warns. Furthermore, Joanna Bryson, an AI ethicist, points out the vulnerabilities of AI systems, such as outages and unforeseen model changes, underscoring the need for accountability in the democratic process.

The U.S. Approach to AI in Legislation

In the United States, the federal government is openly integrating AI, with plans to use Google’s Gemini to help draft regulatory texts. That raises the prospect of legal challenges if procedures are not strictly followed. Philip Wallach of the American Enterprise Institute cautions against treating AI as a shortcut, arguing that embedding AI-generated drafting into the process can obscure errors with significant governmental repercussions.

The Future of AI in Governance

While the integration of AI into legislative processes is not inherently negative, it necessitates careful consideration to avoid deepening public distrust. If designed thoughtfully, AI has the potential to modernize democratic processes and restore trust in governance. However, if mishandled, it could exacerbate existing issues, leading to further disengagement from the public.
