A lawyer with the New York-based firm Levidow, Levidow & Oberman found himself in deep trouble after being caught citing non-existent cases on behalf of his client – a man suing an airline over an alleged injury.
Steven Schwartz admitted in court that he had used ChatGPT to search for relevant references, presenting in his legal brief several supposed past cases, such as "Martinez vs. Delta Air Lines" and "Varghese vs. China Southern Airlines," which never actually existed, according to a court affidavit filed in May.
Schwartz, an attorney with 30 years of experience, told the judge, P. Kevin Castel, that he had asked ChatGPT to verify its sources before accepting the content, and that the chatbot had assured him the references were real.
But neither the judge nor the opposing lawyers were able to locate the precedents Schwartz had relied upon.
Schwartz told the court that he "greatly regrets" using ChatGPT to do his research for the case "and will never do so in the future without absolute verification of its authenticity."
His regret, however, did not sway Judge Castel, who issued an order addressing the matter.
"The Court is presented with an unprecedented circumstance. […]A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases [...] six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations."
The judge scheduled a hearing for June to discuss whether Schwartz should be sanctioned.
Computer scientists have repeatedly observed ChatGPT and other AI chatbots producing false content, a phenomenon they call "hallucination."