When a Nevada County prosecutor cited three completely fabricated cases in court—and then blamed "scrivener's errors"—the California Supreme Court had seen enough.
The unanimous decision in Kjoller v. Superior Court of Nevada County marks a turning point in how California courts will handle AI-generated hallucinations in legal filings. Combined with the recent passage of SB 574 by the California Senate, the message to practitioners is unmistakable: the era of plausible deniability for AI mistakes is over.
The Case That Changed Everything
The facts in Kjoller read like a cautionary tale written specifically for the AI age. A Nevada County District Attorney submitted a response brief citing eight cases. Three didn't exist at all. Three more existed but said nothing resembling what the DA claimed. Even a cited constitutional provision was irrelevant to the point being argued.
When opposing counsel discovered the fabrications and filed for sanctions, the DA's response made matters worse. First came a phone call claiming she was just "going too fast in her research." Then came a brief characterizing wholesale fabrication as "scrivener's errors"—the legal equivalent of claiming the dog ate your homework.
The Court of Appeal twice denied sanctions motions without explanation. But the California Supreme Court wasn't buying it. In a unanimous order, the Court directed the Court of Appeal to issue an order to show cause why sanctions should not be imposed. More significantly, the Court gestured to the civil referee process governed by California Code of Civil Procedure §§ 638-640 as a mechanism for the trial court to investigate and resolve the matter—essentially green-lighting a formal inquiry into whether the DA had relied on AI hallucinations.
The Cover-Up Makes It Worse
The Supreme Court's decision to recommend a referee appointment signals something crucial: how attorneys respond after discovering AI errors matters as much as the errors themselves.
The Court was clearly influenced by United States v. Hayes, where the Eastern District of California sanctioned an attorney who also blamed "hasty" drafting for AI hallucinations. That court didn't just impose monetary penalties—it ordered the sanctions notice be sent to every state bar where the attorney was licensed and to every judge in the district. A permanent, public record of professional failure.
Kjoller follows the same trajectory. By denying responsibility and offering implausible explanations, the DA transformed a correctable mistake into an ethics investigation that could result in career-altering consequences.
The lesson is stark: attorneys who immediately acknowledge AI errors and take corrective action face manageable consequences. Those who deflect, deny, or minimize face investigations, public embarrassment, and escalating sanctions.
The Myth of "Reliable" Legal AI Tools
Many practitioners assume that premium legal research platforms are immune to AI hallucinations. The data tells a different story.
Research presented in the Kjoller petition reveals that AI tools from LexisNexis and Thomson Reuters, the gold-standard names in legal research, hallucinate between 17% and 33% of the time. These aren't experimental startups; they are established platforms with decades of credibility. Yet somewhere between roughly one in six and one in three citations generated by their AI tools may be fabricated.
For context, general-purpose models like ChatGPT hallucinate on legal queries between 58% and 88% of the time. The specialized tools are better, but not reliable enough to justify blind trust.
A fabricated case is misconduct regardless of which platform generated it. The glossy marketing materials and brand recognition of premium vendors don't change that fundamental reality. As the Kjoller petition states plainly: "using AI to generate briefing without carefully cite checking the drafts often will result in the citation of fabricated authorities, which is misconduct."
Law firms cannot outsource verification responsibility to technology vendors. If anything, AI-generated research demands more scrutiny than traditional methods, not less. Every citation must be independently verified, every case read in full, every proposition confirmed against the actual source material.
This Isn't Just About Criminal Law
Kjoller involves criminal defense, where AI hallucinations can have "horrific, life-shattering consequences" for defendants facing incarceration. The stakes in criminal cases naturally heighten judicial concern.
But the California Supreme Court's reasoning applies with equal force to civil practice. The Court's message transcends practice areas: submitting unverified AI outputs to any court invites significant sanctions, including formal investigations into your competence and ethics.
The fundamental obligations haven't changed. Attorneys must present truthful information to courts. They must conduct adequate research. They must verify their sources. AI hasn't automated these responsibilities away—if anything, it's placed them under a microscope.
The Legislature Moves to Codify Verification Requirements
Two weeks after Kjoller, the California Senate passed SB 574, which would require attorneys to take "reasonable steps" to verify all AI-generated materials, correct hallucinations, and remove biased content. The bill also prohibits inputting confidential client information into AI tools and bars arbitrators from delegating decisions to AI.
SB 574 was modeled on existing judicial AI rules and a recent sanctions case rather than on Kjoller itself, yet its timing and substance align squarely with Kjoller's themes. The trend is unmistakable: courts are sanctioning lawyers for unverified AI output, and legislatures are moving to make verification protocols mandatory.
Whether SB 574 becomes law or not, the writing is on the wall. Practitioners who wait for formal legislative mandates are already behind. The standard of care is being established now, case by case, sanction by sanction.
What Practitioners Must Do Now
The implications of Kjoller and the legislative momentum behind SB 574 demand immediate action:
Implement mandatory verification protocols. Every AI-generated citation must be independently verified. Every case must be read in full. Every legal proposition must be confirmed against original sources. Make verification a required step in your quality control process, not an optional safeguard.
Apply equal scrutiny to all AI tools. Don't assume premium platforms are hallucination-proof. Whether research comes from ChatGPT or LexisNexis AI, the verification requirements are identical.
Train your team. Ensure everyone using AI tools understands both the technology's limitations and the professional consequences of submitting fabricated authority. Make it clear that "I didn't know" isn't a defense.
Own your mistakes immediately. If you discover AI hallucinations in filed documents, acknowledge the error promptly and file corrections. The cover-up is worse than the crime.
Protect client confidentiality. Never input confidential information into AI tools unless you have explicit protocols ensuring compliance with ethical obligations.
The California Supreme Court and Senate have made their positions clear. AI is a tool, not a substitute for professional judgment. Attorneys who treat it as such will benefit from its capabilities. Those who use it as a shortcut will face consequences that could define their careers—for all the wrong reasons.