Artificial intelligence has moved far beyond writing emails and generating images; it’s now being used in courtrooms. Across the US, judges are experimenting with generative AI tools such as ChatGPT and Claude to speed up legal research, summarize cases, and even draft rulings.
For an overburdened justice system, the promise is obvious: faster decisions, reduced paperwork, and less backlog. But there’s a catch: AI can be confidently wrong. And in the legal world, a mistake from a judge isn’t just an error. It becomes the law.

When AI Gets It Wrong in Court
We’ve already seen lawyers embarrassed after submitting briefs containing fake case citations “hallucinated” by AI. Even AI experts have been caught out, like a Stanford professor who accidentally included false information in sworn testimony.
Closer to home, South Africa recently had its own AI courtroom scandal. In the case Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator, a junior advocate used an AI tool called Legal Genius to help draft urgent court arguments. Acting Judge DJ Smit discovered at least four completely fictitious case citations. The advocate admitted they were likely generated by AI, and the matter was referred to the Legal Practice Council for investigation.
Whether in New York or Johannesburg, the pattern is the same: AI can produce convincing but false legal references, and those mistakes can slip through until it’s too late.
Judges Testing the Limits
Some judges see AI as a useful assistant, not a replacement for human judgment.
Judge Xavier Rodriguez in Texas uses AI to:
- Summarize cases.
- Identify key players.
- Create timelines.
- Draft potential questions for lawyers before hearings.
To him, these are “low-risk” tasks that still leave room to check for errors before anything enters the record. But he draws a clear line: AI should never decide matters like bail eligibility, where human judgment and discretion are essential.
In California, Judge Allison Goddard takes a similar approach, treating AI as a “thought partner.” She uses it to organize messy documents, summarize lengthy rulings, and brainstorm. However, she avoids it entirely in criminal cases, where bias and error could have serious consequences. She also prefers AI tools that don’t train on user data to protect confidentiality.
The Blurry Line Between Safe and Risky AI Use
Researchers warn that the line between “safe” and “unsafe” AI use in court is hard to define.
Professor Erin Solovey of Worcester Polytechnic Institute notes that even simple AI tasks, like summarizing a legal document, can produce drastically different results depending on the model’s training. A timeline of events may sound convincing, but it can be factually incorrect.
To help clarify the risks, the Sedona Conference published guidelines for judges in early 2025. They recommend using AI for legal research, transcripts, and document searches while stressing that all AI outputs must be verified. Crucially, no current AI tool has solved the “hallucination problem.”
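To make that verification point concrete, here is a minimal sketch of what an automated first-pass check on an AI-drafted document might look like. This is purely illustrative and not from the source: the citation format, the regular expression, and the verified list are all assumptions; a real system would query an official law-report database rather than a hard-coded set.

```python
import re

# Hypothetical set of verified citations, standing in for a lookup
# against an official law-report database. These entries are invented.
VERIFIED_CITATIONS = {
    "2019 (4) SA 123 (GJ)",
    "2021 (2) SA 456 (SCA)",
}

# Rough pattern for a South African law-report citation,
# e.g. "2019 (4) SA 123 (GJ)". Real citation formats vary widely.
CITATION_PATTERN = re.compile(r"\b(\d{4}\s\(\d\)\s[A-Z]+\s\d+\s\([A-Z]+\))")

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citations found in the draft that are absent from the verified set."""
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

# Example: the second citation is fabricated and gets flagged.
draft = "As held in 2019 (4) SA 123 (GJ) and 2099 (9) SA 999 (XX), the point stands."
print(flag_unverified_citations(draft))  # ['2099 (9) SA 999 (XX)']
```

Even a crude filter like this would have caught the fictitious citations in the Northbound Processing matter, but it can only flag what a human must still confirm: that the cited case exists, says what the draft claims, and is still good law.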
The Crisis Waiting to Happen
Louisiana’s Judge Scott Schlegel warns that AI-assisted judicial mistakes are a “crisis waiting to happen.” Unlike lawyers, judges don’t have to explain their reasoning in detail, and when they make a mistake, it becomes legally binding until it is overturned.
In high-stakes situations like child custody or bail decisions, the consequences of relying on faulty AI could be devastating. Schlegel believes AI can help with small, routine tasks, but the heart of the job, thinking through complex decisions from scratch, should never be outsourced.
Why You Should Care
Generative AI could modernize courts and speed up justice. But in a system where every word matters, trusting a tool that sometimes invents information is dangerous.
The public already struggles to trust the legal system. Imagine finding out that a custody decision, a bail ruling, or a criminal sentence was influenced by a chatbot that got its facts wrong. Would you still believe justice was served?
The tech is here to stay, but for now, it should be treated like a junior assistant: fast, useful, and creative, but never trusted without human oversight. In law, as in life, speed means nothing without accuracy.
Reference: Adapted from James O’Donnell, “Meet the early-adopter judges using AI,” MIT Technology Review, August 11, 2025, and from reports on Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator by iafrica.com and ITWeb.