In a lawsuit filed on June 5, radio host Mark Walters alleges that ChatGPT generated fake accusations while a journalist was using the chatbot to research a suit filed by the Second Amendment Foundation. Walters asserts that ChatGPT falsely claimed he had defrauded and embezzled funds from the foundation when he served as its treasurer and chief financial officer. The suit, filed in Gwinnett County, Georgia, contends that these statements were libelous per se.
According to Walters, every fact presented by ChatGPT related to him was false. He was never accused of any wrongdoing by the Second Amendment Foundation, and he has no official or employment relationship with the organization. Furthermore, ChatGPT provided the journalist with a purported copy of the Second Amendment Foundation suit, which Walters claims was a complete fabrication that bore no resemblance to the actual complaint.
The journalist, Fred Riehl, editor-in-chief of the magazine AmmoLand, contacted the Second Amendment Foundation to verify ChatGPT's accusations, and the foundation confirmed that the claims were false.
Legal experts have weighed in on the potential challenges Walters may face in the lawsuit. Megan Meier, a defamation lawyer at Clare Locke, explained that under Georgia law, plaintiffs who do not seek a retraction at least seven days before filing suit are limited to recovering actual economic losses. Because Riehl never published the false claims, Walters may struggle to show that he suffered any such losses.
Eugene Volokh, a First Amendment professor at the University of California, Los Angeles School of Law, commented that Walters has two possible paths to liability: showing that OpenAI published a knowing falsehood or acted with reckless disregard for the truth, or showing negligence coupled with actual damages. Volokh said the first theory is unavailable because there is no allegation that Walters put OpenAI on notice that ChatGPT was making false statements about him. The second theory is also problematic because the complaint alleges no actual damages.
John Monroe, Walters’ lawyer, confirmed that no retraction had been requested and questioned whether demanding one would even be feasible given the nature of AI technology.
Another significant issue is whether OpenAI could be shielded from liability under Section 230 of the Communications Decency Act. This section protects technology companies from being held liable for third-party content posted on their platforms. However, some legal observers argue that a program like ChatGPT may fall outside the scope of this immunity.
Volokh noted that Section 230 might not shield AI companies if they materially contribute to the alleged unlawfulness. Because ChatGPT generated false, reputation-damaging accusations that appeared in none of its source material, OpenAI could be viewed as having materially contributed to the unlawfulness of the content it created.
While the outcome of this lawsuit remains uncertain, it highlights the complexities and legal challenges that arise when AI is used to generate and disseminate information. As the case unfolds, it will likely provide valuable insight into the evolving legal landscape surrounding AI technology and its potential impact on defamation law.