OpenAI’s ChatGPT, a language model trained to generate human-like responses, has come under scrutiny after falsely accusing several law professors of sexual harassment. The accusations arose when Eugene Volokh, a professor at the University of California at Los Angeles School of Law, asked ChatGPT to list law professors who had sexually harassed someone. The Washington Post reported that one of the falsely accused professors, Jonathan Turley, was shocked to learn of the accusation, calling it “incredibly harmful.”
The incident has raised questions about the potential liability of OpenAI for defamatory statements made by ChatGPT. Eugene Volokh is reportedly writing a law review article examining whether the creators of ChatGPT could be sued for libel. The Washington Post noted that one potential issue is whether OpenAI could avoid liability under Section 230 of the Communications Decency Act, which protects online publishers from suits based on third-party content. Another issue is whether a plaintiff could show reputational damage from a false assertion.
Cartoonist Ted Rall has also considered suing ChatGPT after it falsely claimed that he had been accused of plagiarism. Rall spoke with experts about the possibility of a suit, with Laurence Tribe, a professor emeritus at Harvard Law School, stating that it shouldn’t matter for purposes of liability whether a human being or a chatbot generates lies. However, a defamation claim could be complex for a public figure, who would have to show actual malice to recover, according to RonNell Andersen Jones, a professor at the University of Utah S.J. Quinney College of Law. Jones suggested that a product-liability model may be more appropriate than a defamation model for such cases.
Volokh’s online request for feedback on the libel issue prompted many to argue that ChatGPT’s assertions should not be treated as factual claims because they are the product of a predictive algorithm. However, Volokh argued that in libel cases, the critical inquiry is whether the challenged expression, however labeled by the defendant, would reasonably appear to state or imply assertions of objective fact. He noted that OpenAI had touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense, and that its credibility for producing reasonably accurate summaries of the facts is crucial to its current and future business model.
In response to the incident, a spokesperson for OpenAI told the Washington Post that improving factual accuracy is a significant focus for the company and that when users sign up for ChatGPT, OpenAI strives to be as transparent as possible about the fact that it may not always generate accurate answers. The incident highlights the potential legal implications of using artificial intelligence and the need for developers to ensure that their products do not generate false or defamatory statements.