In Depth: Put AI on Your Risk Agenda
Reports have emerged of lawyers citing AI-generated fake cases and feeding client details into open generative AI systems, drawing the attention of regulators and courts. This week, solicitors learned that such actions carry significant repercussions for their firms, particularly as they approach the renewal of their professional indemnity insurance.
One firm owner expressed surprise at the number of questions about AI use on this year's renewal form. Insurers now want to understand a firm's policies, its risk plans, and how far staff are engaging with AI technology.
The Changing Landscape of Legal Practice
Insurers and compliance professionals agree that AI has become a critical element of a firm's risk profile. The legal profession, however, may not yet have fully grasped the need to manage and supervise its use effectively.
Last year, High Court judge Mr Justice Ritchie criticised the "appalling professional misbehaviour" of solicitors and barristers who relied on fake case citations. More recently, two immigration solicitors were referred to the Solicitors Regulation Authority for using generative AI to produce irrelevant or fabricated cases; one admitted to pasting emails containing client details into ChatGPT.
AI Management in Law Firms
At a recent Law Society risk and compliance conference, a poll found that 14% of delegates believed AI at their firm was "allowed but largely unmanaged." Nearly half of attendees felt that managing AI use fell primarily to individual fee-earners, while only 24% said it rested with supervising or managing partners.
Arjun Rohilla, a senior vice president at broker Paragon, said the poll results would be "frightening" for professional indemnity insurers.
Insurers’ Expectations
Insurers are now asking law firms to clarify their AI policy during the renewal process. Marc Rowson, a partner with insurance broker Lockton, emphasized that underwriters are not trying to trap firms but rather want them to embrace AI responsibly.
Rowson outlined three key areas insurers are interested in regarding AI use:
- The accuracy of the work being performed
- Data security and the precautions taken
- Human oversight of verification and security processes
The insurance market is still in a fact-finding stage, and the core question remains whether firms have a comprehensive risk policy concerning AI.
Future Guidance and Responsibilities
The Solicitors Regulation Authority (SRA) is anticipated to release new guidance on the safe and compliant use of AI shortly. This guidance is expected to clarify the rules related to generative tools while reaffirming that client confidentiality, privilege, and consent are non-negotiable. The responsibility for maintaining these standards still lies with solicitors.
Olivier Roth, SRA policy manager specializing in AI and technology, stated that generative AI should be viewed as a tool to enhance professional judgment rather than replace it.
Fundamentals of Risk Management
Experts suggest that to satisfy insurers, law firms must focus on the fundamentals and provide clarity on how they manage risks related to AI. Eloise Butterworth, head of risk and compliance at consultancy HiveRisk, cautioned that some firms might get too caught up in innovation without establishing a solid risk framework.
Butterworth pointed out that insurers will want to know whether firms have an AI policy and if the Compliance Officer for Legal Practice (COLP) has contributed to this policy. It is crucial for AI use to be a part of the risk team’s agenda rather than solely the responsibility of IT and innovation teams.
In conclusion, the effectiveness of a firm's AI policy matters more than its mere existence. Insurers are likely to be more concerned about firms that ban AI use outright, since prohibition can drive unregulated staff usage without any guardrails in place.