ChatGPT fails in court

Sunira Chaudhri

Toronto Employment Lawyer

Protecting our courts from the unchecked dangers of artificial intelligence is of primary importance.

As long as we need judges and lawyers in our courtrooms, one would think the legal profession has some time before artificial intelligence upends it entirely.

How wrong that assumption appears to be.

In a New York courtroom, lawyers have started looking for an upper hand by using artificial intelligence platforms to do legal research and prepare court briefs. A recent case, covered by the New York Times, makes clear that ChatGPT is not yet capable of performing the role of a junior associate.

In an article titled “Here’s What Happens When Your Lawyer Uses ChatGPT,” Benjamin Weiser tells the story of a plaintiff, Roberto Mata (“Mr. Mata”), who sued Avianca Airlines after being struck on the knee by a metal serving cart on a flight from El Salvador to New York in August 2019.

Mr. Mata sued the airline through his lawyers at Levidow, Levidow & Oberman. In response, Avianca moved to dismiss the claim, arguing Mr. Mata had commenced it too late.

Mr. Mata’s lawyers answered with a legal brief, drafted with the help of ChatGPT, arguing that Mr. Mata had a strong legal position and that his claim should be allowed to continue. The problem: the cases referenced in the brief were invented by ChatGPT.

Soon after the brief was filed, Avianca informed the court it could not locate a single authority or quotation cited in it. Mata’s lawyers then filed a compendium of the actual cases referred to in the brief, but these too were found to be bogus.

That’s when Mata’s lawyer, Steven A. Schwartz, admitted to using ChatGPT to prepare the brief.

In an affidavit filed with the court, Schwartz said he “deeply regrets” using the platform to prepare his legal research and that he found ChatGPT to be “a source that has revealed itself to be unreliable.”

In an order from the court addressing the fictional case brief, Judge P. Kevin Castel wrote that the situation was unprecedented in that he had been presented with a legal submission filled with “bogus judicial decisions, with bogus quotes and bogus internal citations.”

Lucky for us, Judge Castel and Avianca’s lawyers conducted a diligent review of the case law being advanced by the plaintiff. Had the ChatGPT brief somehow slipped through the cracks, our entire legal system would have been left in a perilous state. Precedent after precedent could be replaced by bot-generated, fabricated cases that do not accurately represent the legal issues tackled by our courts.

Not only did ChatGPT fall flat in this New York case, but it has caused serious embarrassment, and possibly future discipline, for the lawyers who used it.

The law, at its core, is people-based. It has been, and forever should be, people representing people. Precedent is built by parties being judged by a jury of their peers or a judge weighing the evidence.

This case tells us that protecting our courts from the unchecked dangers of artificial intelligence is of primary importance.

Like I said last week, ChatGPT is not your lawyer!

Have a workplace issue? Maybe I can help! Email me at sunira@worklylaw.com and your question may be featured in a future column.

The content of this article is general information only and is not legal advice.
