A California federal court in Lacey v. State Farm recently provided a dramatic example of the consequences to lawyers and their clients of uncritically trusting artificial intelligence. In what the court described as a “collective debacle,” a large team of high-profile attorneys submitted a brief built on faulty – hallucinated – AI legal research, with damaging effects on their client’s ability to prove her case.
The lawyers and the parties were involved in a dispute over responding to discovery (answering an opponent’s questions and providing documents relevant to the issues in the case). This is a common experience for anyone who’s been through litigation, but what happened next is an important cautionary tale.
The parties’ discovery dispute involved a fight over which documents and what information were protected by attorney-client privilege. That issue can get thorny, and the court ordered the lawyers to submit additional written arguments on the privilege question. This also is not particularly unusual. But then the plaintiff’s lawyers filed a brief based on AI-generated “legal research” that no human bothered to fact-check.
Although multiple attorneys, at multiple levels of the law-firm hierarchy, participated in preparing the brief, no one along the way stopped to check whether the supposed law – mostly citations to previous court cases – was accurate, or even real. One attorney used AI tools to create an outline for the brief. He never checked whether the research results were real, and he sent the outline to co-counsel without telling them he’d used AI. The team of lawyers at the second firm also didn’t check whether the supposed court rulings were real.
Two of the cases cited in the brief filed with the court were entirely made up. Other references were incorrect as well; several contained fabricated quotes attributed to the judges or misrepresented the judges’ rulings.
This is … not good. And it gets worse. When the “Special Master” – the court-appointed neutral handling the discovery disputes – realized the lawyers had cited totally fake cases, he asked them to explain themselves. They submitted a new brief that omitted only the most glaring hallucinations but left the other AI errors unchanged.
Worse still, the court in Lacey came dangerously close to relying on the made-up law. To quote the Special Master:
Directly put, Plaintiff’s use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.
The consequences to the lawyers and – more importantly – to the party they represented are even scarier. The court awarded sanctions of $31,000 and ruled against the plaintiff on the discovery dispute. This means that because of a reckless error by counsel, a party was unable to obtain evidence that could have swung the outcome of her case.
AI is fascinating and has great potential. Smart law firms and employers are staying on the cutting edge, learning how this technology can improve workflows and increase efficiency. But in the area of legal research, AI often gives flat-out wrong information. It is no substitute for research by a skilled human – and anyone who thinks otherwise is on a collision course with angry judges and severe sanctions.
You can read the court’s ruling here.