ChatGPT lawyer on the ropes in court

Sunira Chaudhri

Toronto Employment Lawyer

Instead of doing the research on his own, Steven A. Schwartz had ChatGPT conduct his legal research and even write the legal brief for him, which he then filed at a Manhattan courthouse in support of an upcoming motion.

Last week, I recounted the story of a lawyer who was discovered by a Federal District Court in New York to have filed a legal brief generated by ChatGPT.

 


An article in the New York Times relayed the details of Mr. Schwartz’s court attendance this week, when he was called upon by Judge P. Kevin Castel to respond to pointed questions about the fictitious brief.

 

This week, Judge Castel is considering whether to impose sanctions on Mr. Schwartz and another firm lawyer, Peter LoDuca, whose name was also on the brief.

 

The New York Times article reported that during the attendance, Judge Castel “lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not better check his work.”

 

Though Mr. Schwartz showed complete and utter remorse for using ChatGPT to prepare his brief, the fact remains that he has placed his reputation and work ethic in the balance.

 

Any lawyer or employee who uses artificial intelligence to create a shortcut to perform their work is playing a dangerous game.

 

First, most employees are not transparent about using ChatGPT to create work product. Employers expect employees to do their own work. If the bulk of your work is AI-generated, you may easily find yourself in a similar position to Mr. Schwartz, unable to defend the results produced.

 

Being unable to speak to work you have signed your name to, because you have no idea where it came from, is deeply problematic.

 

The second issue with using ChatGPT at work is that the resulting work product is often inaccurate. ChatGPT and other AI platforms scrape the internet, but the systems do not necessarily understand context or nuance the way humans do.

 

In the case of Mr. Schwartz, ChatGPT invented cases that had never been heard in any courtroom. The legal opinions offered were pure fantasy. Reading one excerpt from the fictitious brief in court this week, Judge Castel asked, “Can we agree that’s legal gibberish?”

 

Third, if paying clients learn you are using AI to complete their work, especially at a high hourly rate, the likelihood of being retained again drops dramatically. Clients hire people for their expertise and innovation. Using AI in that context is a race to the bottom.

 

Lastly, now that Mr. Schwartz and others at his law firm, Levidow, Levidow & Oberman, have admitted to using ChatGPT instead of doing their own research, the firm could be left wondering what value it’s extracting from employees and lawyers who use ChatGPT.

 

Employees who use ChatGPT at work are walking the plank. Using AI to produce substantive content (in contrast to minor administrative tasks) casts doubt on your ongoing employment.

 

If employees use ChatGPT to complete significant work-related tasks, they are begging employers to invest in AI and not employees.

 

Have a workplace issue? Maybe I can help! Email me at sunira@worklylaw.com and your question may be featured in a future column.

 

The content of this article is general information only and is not legal advice.
