The judiciary’s revised AI guidance (31st October 2025) is concise, clear, and deliberately conservative: AI can assist, but human accountability is non-negotiable. The document supersedes the April version and sets out what judges and court staff may safely use AI for, what they should avoid, and the verification and confidentiality standards expected across the system.
What the Guidance Actually Says
- Treat public AI as public. Do not paste confidential or private information into public chatbots: even with chat history disabled, assume anything you enter could be disclosed. The guidance advises disabling history where possible and refusing unnecessary device or app permissions. If confidential or personal data is inadvertently disclosed, report it as a data incident in accordance with the Data Protection Act 2018 and Judicial Office protocols.
- Verify everything. AI outputs may be inaccurate, out of date, or “hallucinated”—including non-existent cases, misstatements of law, or invented quotations. Any legal proposition or citation derived from AI must be checked against authoritative sources before use. This is essential to comply with the duty to the court and to avoid misleading the tribunal.
- Personal responsibility remains with the human. Judicial office holders are personally responsible for all material produced in their name and must read the underlying documents. AI cannot replace direct judicial engagement with evidence or legal argument.
- No blanket duty to disclose AI use. There is generally no obligation for representatives to inform the court that AI was used, provided it is used responsibly and all outputs are verified. However, context matters, and judges may ask how accuracy was ensured.
Where AI Helps—and Where It Doesn’t
Potentially useful (with human review): Summarising long texts, drafting presentation outlines, and administrative tasks such as prioritising emails or drafting memoranda.
Not recommended: Legal research to find new information that cannot be independently verified, and substantive legal analysis. The guidance is clear: AI may accelerate administrative work, but it must not be treated as a source of legal truth.
Red Flags Courts Will Watch For
The guidance lists practical indicators that a submission may be AI-generated or unreliable; a crude automated screen for some of them is sketched after the list. Judicial scrutiny is likely where there are:
- Unfamiliar or US-style citations,
- American spellings or terminology,
- Superficially polished but substantively incorrect analysis,
- Phrases such as “as an AI language model…”,
- Signs of “white text” (hidden prompts or instructions embedded in documents),
- Suspicious digital media, including deepfakes.
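For teams wanting an automated first pass over these indicators, the sketch below encodes a few of them as regular expressions. The patterns and word lists are illustrative assumptions rather than anything drawn from the guidance, and a clean pass proves nothing: the substantive checks described above are still required.

```python
# Illustrative red-flag screen. The patterns are assumptions, not part
# of the judicial guidance; a clean pass proves nothing about accuracy.
import re

RED_FLAG_PATTERNS = {
    # US-style reporter citations such as "410 U.S. 113" or "123 F.3d 456"
    "us_style_citation": re.compile(
        r"\b\d{1,4}\s+(?:U\.S\.|F\.\s?(?:2d|3d|4th)|S\.\s?Ct\.)\s+\d{1,4}\b"
    ),
    # A few common American spellings; crude, so treat hits as prompts
    # to look more closely, not as proof of anything
    "american_spelling": re.compile(
        r"\b(?:color|honor|favor|center|analyz(?:e|ed|ing))\b", re.IGNORECASE
    ),
    # Tell-tale model self-reference left in the text
    "ai_self_reference": re.compile(r"as an AI language model", re.IGNORECASE),
}

def screen(text: str) -> dict[str, list[str]]:
    """Return the red-flag patterns that match, with what they matched."""
    return {
        name: pattern.findall(text)
        for name, pattern in RED_FLAG_PATTERNS.items()
        if pattern.search(text)
    }

if __name__ == "__main__":
    sample = ("As an AI language model, I note that the color of authority "
              "derives from 410 U.S. 113.")
    for flag, hits in screen(sample).items():
        print(f"{flag}: {hits}")
```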
Why This Landed Now: The Ayinde Warning
Earlier this year, the Divisional Court addressed AI misuse directly. In R (Ayinde) v London Borough of Haringey and the linked Al-Haroun v QNB case, the court considered an application for wasted costs after lawyers relied on multiple fake authorities that could not be produced when requested. The court’s message was unequivocal: those who use AI must check accuracy against authoritative sources before advising clients or submitting material to the court.
What This Means in Practice
For Judges and Judicial Staff
- Use only secure work devices; obtain HMCTS service manager approval where required.
- If AI may have been used by parties—especially litigants in person—ask what checks were performed and remind them they are responsible for their submissions.
- If AI assists with administration or summaries, ensure the judge still reads the primary material.
For Legal Representatives
- Adopt a “source-first” workflow: start with primary sources. If AI is used for drafting or ideas, re-trace every proposition to an authoritative source and retain an audit trail (a minimal record format is sketched after this list).
- Never paste client-confidential content into public tools; disable chat history; refuse unnecessary device or app permissions. Escalate and log any inadvertent disclosure as a data incident.
- Be prepared to explain verification: if asked, be able to show how each citation and legal statement was checked.
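One way to retain that audit trail is a structured record per proposition. This is a minimal sketch; the field names and the `VerificationRecord` type are illustrative assumptions, not a prescribed or standard format.

```python
# Illustrative audit-trail record, one per legal proposition. The field
# names and format are assumptions for illustration, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class VerificationRecord:
    proposition: str      # the legal claim as it will appear in the filing
    source: str           # primary source consulted (statute or judgment)
    citation: str         # neutral citation plus pinpoint, or statutory reference
    checked_against: str  # authoritative database or official text used
    checked_by: str       # the person who verified, never the tool
    checked_on: str       # ISO date of the check

record = VerificationRecord(
    proposition="Summary judgment requires no real prospect of success (CPR 24.3)",
    source="Civil Procedure Rules, Part 24",
    citation="CPR 24.3",
    checked_against="official CPR text on justice.gov.uk",
    checked_by="A. Solicitor",
    checked_on=date.today().isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Kept alongside the draft, records like these also answer the “be prepared to explain verification” point directly.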
For Litigants in Person
- Use AI only to help summarise or organise; do not rely on it for legal research or analysis. Check every legal point against official sources (e.g., legislation.gov.uk, caselaw.nationalarchives.gov.uk) before filing. The court may ask what checks you performed.
A Minimal, Court-Ready Verification Protocol
- Identify the claim (e.g., “X is the test for strike-out”).
- Find the source (primary legislation or a named judgment from an authoritative database).
- Match the wording (does the source actually say this?).
- Check jurisdiction and currency (England & Wales, current law).
- Record the citation (neutral citation plus pinpoint).
If any step fails, do not submit the point. This protocol aligns with the guidance’s accuracy and accountability standards and does not create a new disclosure burden.
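As a minimal sketch, assuming each point is captured as a small record like the audit-trail example above, the five steps can be encoded as a pre-filing gate. The neutral-citation pattern is a deliberate simplification covering common forms such as [2025] EWHC 1383 (Admin); real citations have more variants, and a human must still read the source.

```python
# Illustrative pre-filing gate encoding the five steps above. How a firm
# records each step is an assumption; the human checks still happen first.
import re

# Simplified neutral-citation pattern, e.g. "[2025] EWHC 1383 (Admin)"
# or "[2022] EWCA Civ 123"; real citations have more variants than this.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(?:UKSC|UKPC|EWHC|EWCA\s+(?:Civ|Crim))\s+\d+(?:\s+\([A-Za-z]+\))?"
)

def passes_protocol(point: dict) -> bool:
    """Apply the five steps; any failure means the point is not filed."""
    return all([
        bool(point.get("claim")),                          # 1. claim identified
        bool(point.get("source")),                         # 2. source found
        point.get("wording_matches") is True,              # 3. wording checked by a human
        point.get("jurisdiction") == "England and Wales",  # 4. right jurisdiction...
        point.get("current_law") is True,                  #    ...and still current law
        bool(NEUTRAL_CITATION.search(point.get("citation", ""))
             or point.get("is_legislation")),              # 5. citation recorded
    ])

point = {
    "claim": "A statement of case disclosing no reasonable grounds may be struck out",
    "source": "Civil Procedure Rules, rule 3.4(2)(a)",
    "wording_matches": True,
    "jurisdiction": "England and Wales",
    "current_law": True,
    "citation": "CPR 3.4(2)(a)",
    "is_legislation": True,  # statutory references do not carry neutral citations
}
print("ready to file" if passes_protocol(point) else "do not submit this point")
```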
What to Implement This Week (Firms, Chambers, In-House)
- Written AI policy: Define permitted uses (admin/summarisation), prohibited uses (research/analysis), and breach reporting.
- Citation gate: No filing unless every legal point has a verified primary source attached.
- Red-flag training: Teach teams to spot US citations, “too-polished” errors, and hidden text artefacts (a hidden-text detection sketch follows this list).
- Device controls: Block unnecessary app permissions; default to history-off; isolate public AI from client data.
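As one concrete example of checking for hidden-text artefacts: a .docx file is a zip archive whose main body lives in word/document.xml, where white text typically appears as a w:color value of FFFFFF and hidden text as a w:vanish run property. The sketch below is a crude scan built on those assumptions, not a complete detector.

```python
# Illustrative hidden-text scan for .docx files. A .docx is a zip whose
# body sits in word/document.xml; white text typically appears there as
# w:color val="FFFFFF" and hidden text as a w:vanish run property. This
# is a heuristic based on those assumptions, not a complete detector.
import re
import sys
import zipfile

def scan_docx(path: str) -> list[str]:
    """Return crude indicators of white or hidden text in a .docx file."""
    with zipfile.ZipFile(path) as zf:
        xml = zf.read("word/document.xml").decode("utf-8", errors="replace")
    findings = []
    if re.search(r'w:color\s+w:val="ffffff"', xml, re.IGNORECASE):
        findings.append("white-coloured text run")
    if re.search(r"<w:vanish\s*/?>", xml):
        findings.append("hidden (vanish) text run")
    return findings

if __name__ == "__main__":
    for line in scan_docx(sys.argv[1]) or ["no obvious hidden-text markers"]:
        print(line)
```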
The Line the Guidance Draws
The judiciary’s position is pragmatic: AI may speed up administrative work, but it must never dilute human responsibility or the integrity of proceedings. There is no general duty to disclose that AI was used, but there is an absolute duty to ensure that what reaches the court is accurate, appropriate, and secure. If in doubt, read the source, cite the source, and stand behind it.
Primary documents: Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance (October 2025). Divisional Court’s discussion of AI misuse: R (Ayinde) v London Borough of Haringey; Al-Haroun v QNB.
Note: This article is for informational purposes only and does not constitute legal advice.
Disclaimer: All references to legal authorities are provided for context. Users should verify all citations and consult the official text or a qualified legal professional before relying on any authority.

