Talk of “robo-lawyering” has dominated headlines, yet – so far – no judge or barrister in England & Wales has been shown to have relied on unchecked ChatGPT-style output in court. Across the Atlantic, however, attorneys are already being fined for filing briefs laced with AI-hallucinated law, and judges are insisting on formal AI-disclosure certificates.
Why the Judiciary is Suddenly Talking About Algorithms
Since the Courts & Tribunals Judiciary issued its Artificial Intelligence Guidance for Judicial Office Holders in December 2023 (updated April 2025), British judges have been reminded that public chatbots “may make up fictitious cases, citations or quotes” and that any AI output “must be independently checked.” Confidential material is barred from public tools, and responsibility for every ruling “remains personal to the judge.” The guidance pays particular attention to hallucination—the tendency of large language models to invent authority that looks plausible but is entirely false—a risk, it warns, that could “threaten the integrity of the administration of justice.”
England & Wales – Suspicion, but No Proven Professional Use
Despite growing concern, no reported case in England & Wales has found a judge or barrister to have relied on generative AI in court. The Divisional Court in The King (on the application of Frederick Ayinde) v The London Borough of Haringey [2025] EWHC 1383 (Admin) catalogued actual or suspected use of generative AI by lawyers and lamented a new “epidemic” of bogus citations. Even in the most prominent incidents, however, the misconduct the courts identified lay in submitting unchecked or fabricated material, whatever its source.
The only confirmed use of generative AI by a party in UK proceedings involves a litigant in person. In Zzaman v HMRC [2025] UKFTT 00539 (TC), the taxpayer candidly told the First-tier Tribunal he had asked an AI tool to find supportive cases. The tribunal called the attempt “logical and reasonable” for a non-lawyer, yet dismissed the appeal because the cited authorities were irrelevant or misapplied: a cautionary tale, it said, of how plausible-sounding AI output can mislead the untrained.
The nearest a judge has come is Lord Justice Birss, who told a Law Society conference in September 2023 that he had asked ChatGPT to summarise an area of law he already knew well and had used the summary in a judgment, while stressing that full responsibility for the content remained his own. The judgment has never been identified, and no reported decision records the use.
In short, British judges have erected cautionary fences before the horse has bolted: guidance is in place, regulators are on alert, and no reported judgment yet shows a professional relying on unverified AI output in court.
United States – Sanctions, Show-Cause Orders and AI Certificates
If the UK judiciary is building the fence, US courts are busy shutting stable doors that have already been kicked off their hinges.
Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) – the watershed case in which two New York lawyers were fined $5,000 for submitting six fictitious precedents generated by ChatGPT. Judge Kevin Castel condemned their “subjective bad faith” and warned of the “harms [that] flow from the submission of fake opinions.”
Mata was not a one-off. In Park v. Kim, 91 F.4th 610 (2d Cir. 2024), the Second Circuit referred an attorney to its grievance panel after she cited a non-existent decision obtained from ChatGPT, and district courts across the country have since issued their own sanctions and show-cause orders over fabricated citations.
Fed-up judges have moved from case-by-case penalties to standing orders. For example, Judge Brantley Starr of the Northern District of Texas now requires every filer to certify whether generative AI was used and, if so, to confirm that a human has verified each citation. Similar disclosure rules have appeared in other federal courts, including the Eastern District of Pennsylvania, the Northern District of Illinois, and the Court of International Trade, with commentators predicting they will become the US norm.
What is Driving the Divergence?
Timing and culture: US litigators embraced ChatGPT with early enthusiasm; the UK bar, governed by tight collegiate oversight and a tradition of personal liability for pleadings, has been slower to experiment openly.
Regulation first vs. regulation after: The English judiciary produced guidance before any proven AI debacle; American courts reacted to malpractice already on the docket.
Disclosure: No English court yet demands an AI certificate. US judges, stung by hallucinations, increasingly insist on upfront declarations.
Yet the underlying concerns are identical: hallucination, hidden bias, data-privacy breaches, and the erosion of transparent reasoning.
The Next Fault-Lines
- Mandatory disclosure in the UK? Senior judges have hinted that advocates may soon be asked, on the record, whether AI assisted their submissions—mirroring US practice.
- Professional insurance and audits: Chambers and law firms are drafting policies that log prompts, store outputs, and require partner sign-off before any AI-drafted text leaves the building.
- Ethical rules: The Solicitors Regulation Authority (SRA) has not issued AI-specific rules, but its Codes of Conduct require solicitors to ensure accuracy, avoid misleading the court, and maintain professional standards when using AI. The SRA’s approach is principle-based, and further guidance may follow as AI use increases.
- Public confidence: Senior judiciary, including the Master of the Rolls, have warned that while AI can speed justice, “litigants must be sure a human judge remains accountable.” Failure to honour that principle risks undermining the rule of law.
Conclusion: Keep the Human Hand on the Tiller
For now, Britain’s courts have escaped the “hallucinated brief” scandals that plague US dockets, if only by the skin of their teeth. The most notable confirmed UK use came from an unrepresented taxpayer; the most notable US use came from lawyers who landed a headline fine.
Both jurisdictions are converging on the same message: AI may draft, summarise, or search, but it cannot shoulder legal responsibility. Until large language models can guarantee factual accuracy—and disclose their working—judges will keep justice human. Lawyers who forget that lesson risk joining the growing roll-call of sanctions, costs orders, and professional referrals that already define the American experience.
John Barwell is founder of Legal Lens and an advocate for Litigants-in-Person. He writes on systemic bias, access to justice, and the ethical deployment of AI in law.