Leveson on AI in the Criminal Courts – Efficiency and the Limits of Automation
In remarks to the Westminster Legal Policy Forum on February 26, Sir Brian Leveson framed AI as a practical lever for tackling criminal court delays – but only with tight safeguards, improved technical capability across the system, and a preference for closed tools.
The Connection Between Delay and Technology
Sir Brian Leveson used his speech to connect two strands of the criminal courts debate that are often treated separately: the scale of delay and the operational role of technology in reducing it. This was presented as a companion to his Independent Review of the Criminal Courts, which set out 180 recommendations, including 31 focused on technology and AI as enablers of reform.
With the Crown Court backlog close to 80,000 cases, Leveson described a system in which victims and defendants wait so long that confidence diminishes, witnesses disengage, and some defendants play for time in the hope that cases will collapse.
Government Action and Technology as an Enabler
Leveson argued that the government must pull every lever available: structural reform, investment in capacity, and a focus on efficiency across policing, prosecution, defence, courts, and prisons. Technology – and AI in particular – featured as an enabler that can be applied across the full lifecycle of a case.
Safeguards for AI Adoption
Leveson emphasized that AI adoption should not be unfettered. Safeguards are crucial to ensure that AI strengthens rather than undermines core principles of justice. His headline principle was simple and repeated: AI should augment human decision-making, not replace it.
Drivers of Delay and the Growth of Digital Evidence
Leveson attributed delays to familiar drivers such as long-term funding constraints, COVID disruption, and industrial action. However, he also highlighted an accelerating factor: the growth and complexity of digital evidence. This explosion of digital data has changed what investigators collect, what prosecutors must review, what the defence must search, and how disclosure is managed.
AI in Early Stages of a Case
Leveson’s most concrete AI examples sit early in the process. He described a role for technology in reducing police bureaucracy, supporting evidence preparation, and assisting with redaction and file-building. AI can support CPS decision-making and disclosure, particularly in searches across unused material.
AI will not resolve questions of evidential judgment, but it can compress the mechanical steps of review and retrieval, helping cases arrive at court ready to be heard.
Digital Case Management
A second cluster of recommendations focused on listing and case progression. Leveson supported a national listing framework in the Crown Court and argued for a digital case management system underneath it, enabling real-time dashboards and scheduling within a common platform.
He welcomed the pilot of a digital listing tool in Preston and Isleworth, and recommended a digital, interactive case progression system for the magistrates’ and Crown Courts to support effective progression and accountability for compliance with directions.
Improving Courtroom Processes
Leveson advocated greater remote participation in court processes, suggesting that preliminary Crown Court hearings should default to the judge sitting in court with other parties appearing remotely. This approach responds to hours wasted in court and to transport constraints as remand populations rise.
Governance and AI Literacy
The Q&A section highlighted governance issues. Leveson pointed to the failure of the HMCTS Common Platform and expressed support for APIs that link systems together. He warned against over-commitment to tools that do not communicate effectively and cited the Post Office Horizon experience as a cautionary tale.
On AI literacy, Leveson treated capability as an operational risk and supported the idea of a wider AI task force across the system. He emphasized that the courts should not use AI linked to the internet for tasks like preparing case summaries; instead, he envisioned a closed system where AI receives evidence and produces summaries that are then checked by humans.
Conclusion
Leveson’s emphasis on AI was pragmatic rather than speculative. He presented technology as a way to relieve operational pressure while advocating coordinated governance, interoperable systems, and baseline AI literacy across agencies to prevent repeating past technology failures. His message was clear: AI is not a futuristic add-on but a core component of how the courts will operate, designed to support, not substitute for, legal decision-making.