The US Court of Appeals for the Fifth Circuit is weighing a groundbreaking requirement that attorneys verify the accuracy of generative artificial intelligence (AI) materials before filing them with the court. The potential change, detailed in a notice the court released on Tuesday, signals heightened scrutiny of how legal professionals use AI-generated content.
Proposed Rule Changes
The New Orleans-based appeals court is contemplating an amendment to its certificate of compliance that would oblige attorneys to confirm the accuracy of any AI-generated material they submit. Attorneys found to have made a “material misrepresentation” on the certificate could face sanctions. The proposed rule change is open for public comment until January 4, 2024, reflecting the court’s intent to solicit input from legal practitioners and other stakeholders.
Unprecedented Move
If adopted, the rule would make the Fifth Circuit the first US appeals court to impose such a verification requirement on attorneys practicing within its jurisdiction. The development underscores the evolving landscape of legal practice and the pressure on courts to adjust their rules in response to advances in AI technology.
Preceding Initiatives
The move follows the lead of the US District Court for the Eastern District of Texas, which adopted a rule, effective December 1, instructing lawyers to review and verify any content produced by generative AI tools to ensure it meets applicable standards. Separately, US District Judge Brantley Starr of the Northern District of Texas instituted a certification requirement for lawyers appearing in his court, citing the propensity of current AI platforms for hallucinations and bias and the resulting need for independent verification of AI-generated material.
Regional Perspectives
Because Starr’s court sits within the Fifth Circuit, which covers the federal district courts of Texas, Louisiana, and Mississippi, the proposed rule could set a precedent for AI accountability across a substantial region of the United States. US District Judge Fred Biery of the Western District of Texas has likewise reminded attorneys of their professional responsibilities when using AI, emphasizing honesty and the need to validate the contents of pleadings for accuracy and authenticity.
Legal Ramifications
The need for AI accountability gained further prominence earlier this year when US District Judge Kevin Castel of the Southern District of New York sanctioned two attorneys for submitting a legal brief containing fictitious case citations generated by an AI tool. The incident illustrates the legal consequences that can follow when AI-generated content is not diligently verified.