Ninety-two per cent of UK legal professionals now use at least one AI tool in their daily work. The SRA knows this. At its February 2026 webinar, it said plainly that firms without documented AI governance are increasingly visible to regulators. So the question is no longer whether your firm uses AI. The question is whether you can demonstrate that how you use it is compliant.
This guide answers that question directly. It covers the SRA rules that govern AI use, where the genuine compliance risks live, how to assess the tools your firm is already using, and what the three deployment models look like in practice — including the one that eliminates data sovereignty risk entirely.
One important framing point before we start: the SRA has not created a separate AI compliance framework. It has been explicit that existing professional obligations apply to AI-assisted work without modification. This is useful, because it means the compliance question resolves to a set of principles you already know. It also means there is no ambiguity to hide behind.
The Regulatory Landscape in 2026
The SRA's formal AI engagement began in earnest with its October 2024 guidance, which confirmed that the SRA Standards and Regulations apply fully to AI-assisted legal work. No new rules. No exemptions. The same standards, applied to new technology.
Since then, the regulatory drumbeat has accelerated. In February 2026, the SRA held a dedicated webinar on AI policy and regulation. SRA Policy Manager Olivier Roth outlined the distinction between traditional AI and generative AI, highlighted the hallucination risk (AI-invented case citations have already appeared in UK court bundles), and confirmed that the COLP — not the individual fee earner who clicked a button — carries responsibility for ensuring firm-wide AI governance meets regulatory standards.
The SRA also announced that it would publish a Generative AI FAQ and a Good Practice Note on AI use and client data in 2026, and previewed findings from its commissioned research, due for publication in April 2026, indicating that roughly a third of the public has already used generative AI to help identify legal issues. The regulatory attention is building. Firms that have not yet formalised their AI approach are running out of time to do so quietly.
The SRA Principles and Code Provisions That AI Touches Directly
The SRA Principles are not aspirational statements. They are the threshold below which professional conduct becomes a regulatory matter. Two of them, integrity and honesty, are directly implicated by AI use, alongside two provisions of the Code of Conduct: confidentiality and competence.
Principle 5 — Integrity
AI generates confident, fluent output that is sometimes entirely wrong. In the Mata v. Avianca case in the United States, a lawyer submitted AI-generated case citations that did not exist. The citations read as plausible; the court found they were fabricated. This is not a distant risk — UK practitioners have submitted AI-generated bundles containing invented references. Submitting AI output without verification is not just sloppy; under Principle 5 (acting with integrity), it is an integrity failure. The tool hallucinated. The solicitor signed off without checking. That is the professional's failure, not the software's.
Principle 4 — Honesty
The duty not to mislead courts, tribunals, and regulators is absolute. When AI drafts a skeleton argument, a witness summary, or a letter of advice, the solicitor who approves and submits it is representing its accuracy. There is no AI caveat that discharges this duty. If an AI-drafted document contains a misstatement and it goes to a client or a court, that is a Principle 4 issue for the responsible solicitor.
Client Confidentiality — SRA Code 6.3
This is where most firms are most exposed. SRA Code 6.3 requires solicitors to keep the affairs of current and former clients confidential unless disclosure is required or permitted by law or the client consents. When client documents are uploaded to a public cloud AI tool, that data travels to servers outside the firm's control. Depending on the provider's terms, it may be used for model training. It is potentially subject to foreign law enforcement access. The client almost certainly did not consent to any of this. This is the compliance gap that matters most, and it is operating right now in firms across the UK that have not reviewed their AI tool terms carefully.
Competence — Code 3.3
Code 3.3 requires solicitors to maintain their competence and keep their professional knowledge and skills up to date. The SRA has been clear that competence now includes understanding the AI tools you use — how they work, what their limitations are, and where their output should not be trusted without independent verification. A solicitor who cannot explain what an AI tool did, or who relies on its output without the capacity to check it, is not meeting the competence standard. This is a skills obligation, not just a process one.
The COLP's Role: Governance Before Deployment
The SRA's position on the COLP's responsibilities around AI is clear, and it is worth stating explicitly because many firms are getting this sequencing wrong.
The SRA expects the COLP to be briefed before any AI tool trial begins, not after deployment. The COLP must satisfy themselves on four questions before sign-off: What does the tool do with client data? Where is that data processed and stored? Are outputs reviewed by a qualified solicitor before use? What governance arrangements and audit trails are in place?
Deploying an AI tool at practice area level without COLP sign-off is a governance failure. So is allowing individual fee earners to use personal AI subscriptions for client work without approval. The February 2026 SRA webinar made clear that firms without documented AI governance are increasingly visible to the regulator, not least because the SRA's commissioned research shows clients now arriving with AI-generated views of their situation, and those conversations quickly expose which firms have a structured approach to AI and which do not.
The Data Sovereignty Problem: Where Most Firms Are Exposed
Data sovereignty is the practical compliance failure most likely to affect a firm that has adopted AI tools without formal governance. It operates through two independent legal mechanisms, either of which creates a material risk.
UK GDPR: The Data Processing Agreement Gap
When an AI tool processes personal data contained in client documents, the firm is acting as a data controller and the AI provider is a data processor under Article 28 of the UK GDPR. This requires a written Data Processing Agreement to be in place before any processing begins. The DPA must specify the nature of the processing, the technical and organisational security measures, the data retention terms, and the processor's sub-processor obligations. Most ad hoc AI subscriptions — the ones individual fee earners sign up for using a firm email address — do not come with a DPA. Processing personal data without one is a UK GDPR breach, independent of any SRA obligation.
The US CLOUD Act: A Risk Most Firms Have Not Assessed
If client data is handled by a provider subject to US jurisdiction (typically a US-headquartered company or its subsidiaries), the Clarifying Lawful Overseas Use of Data Act 2018 (the CLOUD Act) allows US law enforcement to compel disclosure of that data, regardless of where it physically sits. The data does not need to be on a US server; it needs only to be within the provider's possession, custody, or control. For matters involving commercial litigation strategy, regulatory investigations, or sensitive M&A negotiations, this is a risk that most fee earners have not considered when reaching for a ChatGPT or Copilot subscription. It is not theoretical. It is a structural feature of using US-based cloud AI for UK legal work.
AI Risk by Use Case: A Practical Assessment Framework
Not all AI use carries the same compliance weight. The risk profile depends on what data the AI is processing and what the output is used for. The table below provides a starting framework for categorising common legal AI use cases by risk level.
| Use Case | Data Involved | Risk Level | Key Obligation |
|---|---|---|---|
| Legal research (public sources only) | None — public information | Low | Verify all citations before use (Code 3.3) |
| Drafting precedents from public templates | None — generic content | Low | Human review before sending (Principle 2) |
| Summarising client correspondence | Confidential client communications | High | DPA required; data must stay on firm infrastructure |
| Document review — due diligence | Commercial confidential documents | High | Data sovereignty; CLOUD Act exposure if US provider |
| Drafting witness statements | Personal data, potential special category data | High | UK GDPR Article 9 for special category data; DPIA may be required |
| Internal admin (meeting notes, billing) | Internal data, limited client exposure | Medium | Minimise client data in prompts; DPA still required |
| Court bundle preparation | Privileged material, personal data | High | Hallucination risk is acute; mandatory human review before filing |
This table is a starting point, not an exhaustive risk register. Regulated practice areas — financial services, immigration, family law — carry sector-specific obligations that sit on top of this baseline. The ICO also publishes guidance on data protection in legal services contexts that is worth reviewing alongside SRA obligations.
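Firms that track approved use cases in software can encode this framework directly, so that an intake script or document pipeline checks the risk category before any prompt is sent. The sketch below is illustrative only: the categories and obligations come from the table above, but the `RISK_REGISTER` structure and the `assess()` helper are hypothetical conveniences, not anything published by the SRA.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass(frozen=True)
class UseCase:
    data_involved: str
    risk: Risk
    key_obligation: str


# An abridged version of the table above; a real register would carry
# every row and be owned and versioned by the COLP.
RISK_REGISTER = {
    "legal_research_public": UseCase(
        "None - public information", Risk.LOW,
        "Verify all citations before use (Code 3.3)"),
    "summarise_client_correspondence": UseCase(
        "Confidential client communications", Risk.HIGH,
        "DPA required; data must stay on firm infrastructure"),
    "court_bundle_preparation": UseCase(
        "Privileged material, personal data", Risk.HIGH,
        "Mandatory human review before filing"),
}


def assess(use_case: str) -> UseCase:
    """Look up a use case, failing closed if it is not registered."""
    try:
        return RISK_REGISTER[use_case]
    except KeyError:
        raise ValueError(
            f"'{use_case}' is not in the approved register; "
            "refer to the COLP before proceeding"
        ) from None
```

The design point is the fail-closed default: a use case that is not in the register routes to the COLP rather than silently defaulting to low risk.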
Three Deployment Models: Matching Architecture to Risk
The data sovereignty risk is real, but it is not insuperable. The correct response is matching your deployment architecture to your risk profile. There are three models in common use.
Model 1: SaaS AI with Properly Negotiated Controls
Tools like Microsoft 365 Copilot, Harvey AI, or Lexis+ AI can be used compliantly when properly configured. The requirements are non-negotiable: a signed Data Processing Agreement, confirmed UK or EEA data residency (contractually guaranteed, not just claimed in a marketing brochure), model training opted out, and appropriate access controls. The COLP should review the DPA personally before sign-off. These tools are appropriate for many use cases. They are not appropriate for the most sensitive matters, because data still leaves the firm's infrastructure and sits within a third-party contractual relationship. They also require ongoing monitoring — terms of service change, and what was compliant in 2024 may not be compliant in 2026.
Model 2: Private Cloud with UK Data Residency
Some providers offer dedicated cloud tenants that keep data within UK data centres under the firm's own administrative control. This meaningfully reduces third-party exposure, and the model suits mid-size firms that need cloud scalability but want stronger data isolation than shared-tenant SaaS provides. Compliance still depends heavily on the contractual terms and on the provider's corporate structure: a UK-based subsidiary of a US parent may still carry CLOUD Act exposure.
Model 3: On-Premises Air-Gapped AI
The cleanest compliance architecture for law firms handling sensitive client data. All AI processing happens on hardware physically located within the firm's own premises. No data leaves the network. No third-party access is possible by design. When a client asks "where is my data?", the answer is "in our building." For firms handling high-value litigation, regulatory investigations, M&A, or sensitive personal matters, this is the model that eliminates the data sovereignty question rather than managing it.
Nerdster Vault is built on this architecture. It runs entirely on-premises with no external connectivity required — we demonstrate this during every deployment by removing the network connection while the system continues to operate. Document review, matter summarisation, precedent assistance, and compliance checking all run locally. For firms where client confidentiality is a structural requirement rather than a box-ticking exercise, on-premises AI is what SRA compliance actually looks like in practice.
For firms that want AI assistance without the infrastructure investment, Nerdster Vault also offers a cloud-hosted option purpose-built for UK law firms, with UK data residency, DPA in place, and model training opted out as standard. It is not air-gapped, but it is designed from the ground up for SRA-regulated practice rather than adapted from a consumer product.
The Pre-Deployment Checklist: Eight Questions to Answer Before Going Live
Before any AI tool is used for client-facing work, the COLP should be able to answer yes to each of the following. This is not a compliance shortcut — it is a minimum threshold.
- COLP sign-off obtained — The COLP has reviewed the tool's data practices and given documented approval before deployment began, not after.
- Data Processing Agreement in place — A signed DPA exists under Article 28 of the UK GDPR. It specifies the processing purpose, security measures, retention, and sub-processor obligations.
- Data residency confirmed contractually — UK or EEA data residency is guaranteed in the DPA or service agreement, not merely stated in a help article.
- Model training opted out — The firm's data is contractually excluded from being used to train or improve the AI provider's models.
- Use case risk assessment completed — Each use case the tool will be deployed for has been assessed against the risk framework above. High-risk use cases (client document processing) have been reviewed by the COLP and data protection lead.
- Supervision policy documented and communicated — Fee earners know that all AI output requires human review before use, and they know who holds professional responsibility for AI-assisted work on each matter.
- AI use documented in matter files — The firm's file management system captures which AI tool was used, for which task, and which solicitor reviewed the output.
- Review cycle in the diary — AI tools and regulatory guidance both change. A formal review of AI compliance is scheduled at least every six months.
If any of these eight points cannot be answered yes, the tool should not be in use for client work. The SRA's position is clear: it is not waiting for a specific AI rule to take effect before it expects firms to meet their existing professional obligations. Those obligations exist today.
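For firms that record these controls in software rather than on paper, the checklist reduces naturally to a fail-closed gate. A minimal sketch, assuming nothing about any real case management or governance system (the `GoLiveCheck` structure and its field names are invented for illustration):

```python
from dataclasses import dataclass, fields


@dataclass
class GoLiveCheck:
    """The eight pre-deployment controls, each answered True/False.

    Field names are hypothetical; map them to whatever your firm's
    governance records actually capture.
    """
    colp_sign_off: bool
    dpa_signed: bool
    data_residency_contractual: bool
    model_training_opted_out: bool
    use_case_risk_assessed: bool
    supervision_policy_communicated: bool
    ai_use_logged_in_matter_files: bool
    review_cycle_scheduled: bool

    def failures(self) -> list[str]:
        """Names of every control that is not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def approved_for_client_work(self) -> bool:
        # All eight controls must hold; any single failure blocks go-live.
        return not self.failures()
```

A tool that fails any check simply does not go live for client work, and the `failures()` list doubles as the COLP's remediation agenda.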
Writing Your Firm's AI Policy
The February 2026 SRA webinar was unambiguous: firms without documented AI policies are visible to regulators. Writing one is not complicated, but it does require deliberate choices that reflect your firm's specific risk profile and practice areas.
An effective law firm AI policy covers six areas:

- Approved tool list — which AI tools are sanctioned for use, and for which specific tasks.
- Data classification rules — which categories of data may and may not be processed by AI; client-confidential and legally privileged material should attract the strictest controls.
- Supervision requirements — who must review AI output before it is used, and how that review is documented.
- Matter file documentation standard — what records of AI use must be created in the case management system (a minimal sketch of such a record follows this list).
- Training requirements — what fee earners must understand about AI limitations before using approved tools.
- Incident response procedure — what happens when AI output contains an error that reaches a client or court.
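Of these six areas, the matter file documentation standard is the most straightforward to enforce in software, because it is essentially a record schema. A minimal sketch of one AI-use entry, assuming a case management system that can store structured notes (the `AIUseRecord` type, its field names, and the example values are all hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIUseRecord:
    """One entry per AI-assisted task on a matter.

    Hypothetical schema for illustration; adapt the fields to your
    case management system.
    """
    matter_ref: str        # e.g. "LIT-2026-0042" (invented reference)
    tool: str              # must be on the firm's approved tool list
    task: str              # what the AI was asked to do
    reviewed_by: str       # solicitor who verified the output
    review_outcome: str    # "approved", "amended", or "rejected"
    timestamp: datetime


record = AIUseRecord(
    matter_ref="LIT-2026-0042",
    tool="Approved summarisation tool",
    task="First-pass summary of disclosure bundle",
    reviewed_by="J. Smith (supervising solicitor)",
    review_outcome="amended",
    timestamp=datetime.now(timezone.utc),
)
```

Even this much structure answers the questions a regulator will ask: which tool, for which task, reviewed by whom, with what outcome, and when.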
Our AI Policy Template for UK Businesses provides a section-by-section framework that law firms can adapt. The core structure applies; the firm-specific elements — approved tool list, data classification, practice area nuances — require your own input.
"The SRA expects the COLP to be responsible for regulatory compliance when new technology is introduced. The COLP should be briefed before the trial begins, not after deployment." — SRA AI Policy and Regulation Webinar, 4 February 2026
What Comes Next: The SRA's Enforcement Direction
The SRA has signalled clearly where it is heading. The forthcoming Good Practice Note on AI and client data will set a published standard against which firms can be assessed. The Generative AI FAQ will give specific guidance on tools like ChatGPT that are already in widespread use in legal practice. The April 2026 research publication will quantify public AI use in legal contexts and almost certainly prompt further regulatory attention on client-facing AI risk.
Firms that have governance in place before these documents are published are in a materially better position than those scrambling to retrofit compliance after the fact. The firms that will face the most difficult conversations with the SRA are the ones where AI adoption happened informally, at individual fee earner level, without COLP sign-off, without data processing agreements, and without any documentation in matter files.
That is not a hypothetical scenario. It is the current state of AI use in a significant proportion of UK law firms. The question for your firm is which category it falls into — and whether the answer is one you are confident about.
If your firm is considering AI for document-heavy workflows, our guide to AI document review for law firms covers what works, what does not, and how to deploy it within SRA requirements. For a broader look at which tasks can be safely automated, see how to automate law firm work without losing fee earner control. And if client confidentiality is the primary concern driving your AI decisions, our analysis of AI and client confidentiality for UK solicitors maps the specific obligations under SRA Code 6.3.
For a structured view of where your firm currently stands, the Nerdster AI Readiness Quiz provides a 12-question assessment mapped to SRA and GDPR obligations, with scored results and specific recommendations by risk category. It takes about eight minutes and produces output you can put in front of your COLP today.
If you are at the point where you need a proper compliance review rather than a quiz, our AI Security & Safeguarding service works through your current AI tool portfolio, maps it against SRA and GDPR obligations, identifies the gaps, and produces a documented remediation plan. We have done this with law firms across practice areas from commercial property to family law. The picture is rarely as bad as firms fear — but it is almost always different from what they assumed.