AI’s Role in Transforming Criminal Court Efficiency

Leveson on AI in the Criminal Courts – Efficiency and the Limits of Automation

In remarks to the Westminster Legal Policy Forum on February 26, Sir Brian Leveson framed AI as a practical lever for tackling criminal court delays – but only with tight safeguards, improved technical capability across the system, and a preference for closed tools.

The Connection Between Delay and Technology

Sir Brian Leveson used his speech to connect two strands of the criminal courts debate that are often treated separately: the scale of delay and the operational role of technology in reducing it. This was presented as a companion to his Independent Review of the Criminal Courts, which set out 180 recommendations, including 31 focused on technology and AI as enablers of reform.

With the Crown Court backlog close to 80,000 cases, Leveson described a system where victims and defendants wait so long that confidence diminishes, witnesses disengage, and some defendants game the system in hopes that cases will collapse under their own weight.

Government Action and Technology as an Enabler

Leveson argued that the government must pull every lever available: structural reform, investment in capacity, and a focus on efficiency across policing, prosecution, defence, courts, and prisons. Technology – and AI in particular – featured as an enabler that can be applied across the full lifecycle of a case.

Safeguards for AI Adoption

Leveson emphasized that AI adoption should not be unfettered. Safeguards are crucial to ensure that AI strengthens rather than undermines core principles of justice. His headline principle was simple and repeated: AI should augment human decision-making, not replace it.

Drivers of Delay and the Growth of Digital Evidence

Leveson attributed delays to familiar drivers such as long-term funding constraints, COVID disruption, and industrial action. However, he also highlighted an accelerating factor: the growth and complexity of digital evidence. This explosion of digital data has changed what investigators collect, what prosecutors must review, what the defence must search, and how disclosure is managed.

AI in Early Stages of a Case

Leveson’s most concrete AI examples sit early in the process. He described a role for technology in reducing police bureaucracy, supporting evidence preparation, and assisting with redaction and file-building. AI can support CPS decision-making and disclosure, particularly in searches across unused material.

While AI cannot exercise evidential judgment, it compresses the mechanical steps of review and retrieval, helping cases arrive at court ready to be heard.

Digital Case Management

A second cluster of recommendations focused on listing and case progression. Leveson supported a national listing framework in the Crown Court and argued for a digital case management system underneath it, enabling real-time dashboards and scheduling within a common platform.

He welcomed the pilot of a digital listing tool in Preston and Isleworth, and recommended a digital, interactive case progression system for the magistrates’ and Crown Courts to support effective progression and accountability for compliance with directions.

Improving Courtroom Processes

Leveson advocated for greater remote participation in court processes, suggesting that preliminary Crown Court hearings should default to judges in court while other parties appear remotely. This approach responds to wasted hours in court and transport constraints as remand populations rise.

Governance and AI Literacy

The Q&A session highlighted governance issues. Leveson pointed to the failings of the HMCTS Common Platform and expressed support for APIs that link systems together. He warned against over-committing to tools that do not communicate effectively and cited the Post Office Horizon scandal as a cautionary tale.

On AI literacy, Leveson treated capability as an operational risk and supported the idea of a wider AI task force across the system. He emphasized that the courts should not use AI linked to the internet for tasks like preparing case summaries; instead, he envisioned a closed system where AI receives evidence and produces summaries that are then checked by humans.

Conclusion

Leveson’s emphasis on AI was pragmatic rather than speculative. He presented technology as a way to relieve operational pressure while advocating for coordinated governance, interoperable systems, and baseline AI literacy across agencies to prevent repeating past technology failures. He made it clear: AI is not a futuristic add-on but a core aspect of future court systems, designed to support, not substitute, legal decision-making.
