Steven Schwartz, a New York lawyer, appeared before US District Judge P. Kevin Castel to seek leniency after submitting a court brief containing fabricated legal precedents generated by the artificial intelligence tool ChatGPT. Schwartz, who now faces possible sanctions, said he had not known the tool could produce false case citations and court opinions.
During the hearing, Schwartz admitted his failure to ensure the accuracy of the cited cases, stating, “There were many things I should have done to assure the veracity of these cases. I failed miserably at that.” The incident highlights the disruptive potential of generative AI tools like ChatGPT, which could transform white-collar work at law firms, financial institutions, and beyond. It also underscores the risks of over-relying on such technology.
ChatGPT, developed by OpenAI, is a chatbot that engages in human-like conversation, drawing on vast amounts of text from the internet. Its developers openly acknowledge, however, that it is prone to producing inaccurate information, a tendency known as hallucination.
Schwartz admitted that he had unwittingly relied on ChatGPT, which fabricated six cases that he cited in a brief for a suit against Avianca Airlines. The plaintiff alleged that an Avianca employee had struck him in the left knee with a metal serving cart during a 2019 flight from El Salvador to New York, causing serious personal injuries. Avianca moved to dismiss the suit on the grounds that it was filed after the statute of limitations had expired.
After Avianca’s lawyers raised concerns about the authenticity of the cited cases, Schwartz continued to rely on ChatGPT even after being alerted that the citations were fabricated. Judge Castel emphasized that the case turns not only on Schwartz’s conduct but also on that of his colleague, Peter LoDuca, and on what both did after the phony references came to light.
During the hearing, Castel questioned Schwartz about his failure to verify the cases using legal research databases, law library resources, or even a simple Google search; Schwartz admitted he had made no such checks. The judge also asked whether Schwartz had harbored any suspicions about one of the principal fabricated cases in the brief, the non-existent “Varghese v. China Southern Airlines Co.,” which contained nonsensical passages. Schwartz responded that he had never imagined ChatGPT could invent an entirely fictional case, and that he only became aware of the possibility when Judge Castel issued an order to show cause on May 4.
After the problems with Schwartz’s brief came to light, federal judges in Illinois and Texas issued standing orders requiring lawyers to certify either that their filings were not drafted with generative AI or that a human had verified the accuracy of any AI-generated language. Judge Brantley Starr of the Northern District of Texas acknowledged the power of AI platforms but said that legal briefing is not an appropriate use for them.
In the Avianca case, Judge Castel scheduled a sanctions hearing for Schwartz; for LoDuca, who signed and filed the brief; and for their firm, Levidow, Levidow & Oberman, a four-lawyer Manhattan personal injury practice. Castel also raised the possibility of referring Schwartz to a state attorney grievance committee for an investigation into his professional conduct.
Schwartz’s defense counsel, Ronald Minkoff, urged Judge Castel not to impose sanctions, pointing to the significant damage the episode had already done to the reputations of Schwartz and his firm. Minkoff argued that the public embarrassment they had suffered was deterrent enough.
Judge Castel adjourned the hearing without indicating when he would rule on possible sanctions. The case stands as a cautionary tale for legal professionals adopting new technologies, underscoring the need for diligence, verification, and critical assessment of AI-generated content in legal work.