As the generative AI economy takes shape, major law firms are moving into red-teaming: stress-testing artificial intelligence (AI) models to ensure they comply with legal standards. The approach pairs lawyers with data scientists, with the primary goal of preventing AI-driven systems from inadvertently leading companies into legal jeopardy.
The Crucial Role of Algorithmic Audits
As corporations deploy chatbots and other generative AI tools, scrutinizing these systems for potential legal pitfalls becomes paramount. Firms like DLA Piper are pioneering “algorithmic audits” to check adherence to legal standards in areas such as bias, compliance, copyright, and privacy.
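To make the idea concrete, here is a minimal sketch of one kind of check a bias-focused algorithmic audit might run: comparing a model’s approval rates across groups against the “four-fifths” rule of thumb used in U.S. disparate-impact analysis. The data, function names, and threshold are hypothetical illustrations, not any firm’s actual methodology; real audits span many more dimensions (privacy, copyright, accuracy).

```python
# Illustrative sketch only: a disparate-impact screen of the kind an
# algorithmic audit might apply to a model's decisions. All names and
# data here are hypothetical examples.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-treated group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical decisions: group A approved 80% of the time, group B 50%.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50
print(four_fifths_check(decisions))  # flags group B (rate ratio below 0.8)
```

A real engagement would pair a screen like this with statistical significance testing and a review of how the flagged disparity arises in the model.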
Big Law’s Investment in AI
While the legal industry adapts to the transformative potential of AI, only a select few firms are actively investing in red-teaming and algorithmic audits. DLA Piper, a global player with a revenue of $3.7 billion, stands out among those embracing this innovative approach. Other major law firms, including Cooley, Dentons, Allen & Overy, and Gibson, Dunn & Crutcher, are also leaning into the possibilities that AI presents for their practices.
Evolution of AI Practices
DLA Piper initiated its AI practice in 2019, focusing initially on narrow-purpose mathematical models. However, with the rise of generative AI, the firm has had to adapt its testing methodologies. Unlike traditional machine-learning algorithms, generative AI presents challenges due to its almost infinite range of prompts and outputs.
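The red-teaming challenge the article describes can be sketched in miniature: run a battery of adversarial prompts through a model and record which policy checks its outputs fail. Everything below is a hypothetical illustration, assuming a stand-in `toy_model` callable and two example checks; it is not DLA Piper’s actual testing methodology.

```python
# Minimal sketch of prompt-based red-teaming for a generative model.
# `model` is any text-generation callable; prompts and checks are
# hypothetical examples of the kinds of policies an audit might test.
def red_team(model, adversarial_prompts, checks):
    """Run each adversarial prompt; record which policy checks its output fails."""
    findings = []
    for prompt in adversarial_prompts:
        output = model(prompt)
        failed = [name for name, check in checks.items() if not check(output)]
        if failed:
            findings.append({"prompt": prompt, "output": output, "failed": failed})
    return findings

# Stand-in "model" that unsafely complies with whatever it is asked for.
def toy_model(prompt):
    return f"Sure, here is {prompt.lower()}"

checks = {
    "no_personal_data": lambda out: "social security" not in out.lower(),
    "refuses_legal_advice": lambda out: not out.lower().startswith("sure"),
}

prompts = ["A customer's Social Security number", "Binding legal advice"]
for finding in red_team(toy_model, prompts, checks):
    print(finding["prompt"], "->", finding["failed"])
```

The difficulty the article notes is visible even here: the prompt list can never be exhaustive, so teams must decide which adversarial behaviors to sample and how to score open-ended outputs.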
Building Expert Teams for AI Testing
DLA Piper’s strategy involves assembling teams of lawyers specializing in various fields, such as health, financial services, and insurance, alongside data scientists with legal expertise. In a strategic move, the firm recently recruited a team of data scientists from Faegre Drinker Biddle & Reath, led by Bennett Borden, a seasoned attorney-data scientist with extensive legal tech experience.
The Importance of Crash-Testing LLMs
DLA Piper’s 100-member team, under the leadership of Borden and Danny Tobey, a doctor and software entrepreneur, is actively working on developing new mechanisms to test generative AI systems. These efforts include collaborating with insurance regulators to understand how traditional AI models should be tested, ensuring trustworthiness and anti-bias measures.
Shaping Best Practices and Regulations
Borden emphasizes the excitement of working in an area where no agreed-upon standards exist. The firm sees its role as not only stress-testing AI but also working with clients to establish best practices, contributing to the conversation with regulators and legislatures to shape new regulations.
Beyond Algorithmic Testing
Algorithmic testing is just one facet of the evolving landscape for Big Law. Firms are extending their services to include defending clients in litigation related to AI bias or inaccuracy claims, offering assistance in various AI solutions, and advising on governance policies.
The Uneven Adoption of AI in Big Law
While many law firms recognize both the threats and opportunities posed by generative AI, not all have embraced the technology wholeheartedly. Some adopt a cautious approach, testing the waters with limited use, while others remain skeptical, awaiting resolutions to critical questions surrounding data security and accuracy.
Winners and Losers in the AI Arms Race
As the legal industry navigates the AI arms race, adaptability emerges as a key factor for success. The winners are expected to be those who adeptly integrate AI into their practices, whether they are large law firms or nimble entities that successfully challenge industry giants. The losers, on the other hand, risk falling behind by either resisting adaptation or placing misguided bets on the wrong technology.