Research and writing have long been among the primary functions of lawyers. In 2026, lawyers are still analyzing fact patterns in light of the law on a daily basis. While much remains the same despite the passage of time, the legal profession is now at a watershed moment due to the explosion of AI. It presents lawyers and their clients with opportunity, but also risk. And the new landscape has culminated in proposed legislation known as SB 574.
SB 574, which recently passed the California Senate, would require an attorney who uses generative AI to take reasonable steps to verify the information and to ensure that certain confidential or nonpublic information is not entered into the AI system. While the law sounds rational on its face, is it really a good idea to legislate this when lawyers are already obligated by their own professional standards to use the technology responsibly? Maybe it’s too much too soon?
When I started practicing law in 1996, we had law libraries with dark wood and many shelves with hardcover books with names like Black’s Law Dictionary, Standard California Annotated Codes, and Official Case Reports. An associate could lose themselves in these dusty rooms for hours.
Then came the advent of electronic legal research with Westlaw and Lexis. The tools were basic and very slow, but, wow, what an improvement over the books.
Now fast forward to 2024, when AI tools worked their way into the mainstream consciousness of lawyers, and it became possible to generate a legal brief simply by asking ChatGPT to do so. It seemed too good to be true, and, indeed, it was.
Many lawyers remember when the New York case of Mata v. Avianca hit the national news.
An otherwise unremarkable lawsuit, the case made waves in the spring of 2023 after the plaintiff’s lawyer used AI to generate a motion that was full of hallucinated citations and nonexistent legal quotes. The lawyer was sanctioned and ordered to send letters to each real judge identified as the author of the fake opinions. The court highlighted the harms flowing from such a brief, including the time and money wasted in exposing the deception and the harm to the reputation of the system of justice.
Many thought this was a one-off. Surely lawyers would learn that it is a bad idea to ask ChatGPT to generate a memo or a brief. But, alas, there has been incident after incident of briefs with fake citations.
Why has this occurred? It is my experience that most lawyers are well-intentioned, but also busy, with clients demanding maximum efficiencies. Utilizing AI to assist with the analysis is hugely tempting, and many lawyers are still unaware of the practical safeguards that need to be employed.
Next came an opinion published by the Second District Court of Appeal in September 2025 – Noland v. Land of the Free. Nearly all of the legal quotations in the plaintiff’s opening brief, and many of the quotations in the reply brief, were fabricated by AI, though the attorney was unaware. The court published the opinion “as a warning,” noting that “no brief… should contain any citations… that the attorney responsible for submitting the pleading has not personally read and verified.”
The profession of law was on notice – and stressed out. Everyone was scrambling to figure out which AI tools were reliable and which ones weren’t. Partners began to panic about the best way to supervise the associates using AI. Law firms began to question whether they could co-counsel with other law firms that might use AI. Lawyers struggled to figure out how to quadruple check the many citations and quotes in the dozens of pages of briefing that may be filed on any given day. Legal ethics attorneys such as myself suddenly had reams of new business.
And onto the scene came SB 574, introduced by a California Senator in February 2025. It passed out of the Senate on January 29, 2026. In its Executive Summary, the Senate Judiciary Committee stated that the “bill seeks to enact basic guidelines for the use of generative AI by attorneys and arbitrators.”
Though not described in any detail in the Executive Summary, the reality is that lawyers across the country are already regularly facing sanctions hearings, discipline from regulators, and adverse action by clients for AI-hallucinated citations. Many of these lawyers are conscientious and hard-working practitioners who may not have even utilized AI themselves, but were affiliated with other lawyers who did. The profession is seriously reeling from these consequences and trying to catch up on the technology, the landscape, and the ethical requirements. This AI revolution and the attendant consequences in the law profession all happened in the blink of an eye.
So, is it a good idea to enact a law that, in sum and substance, says that in utilizing AI, a lawyer must understand the technology, respect client confidentiality, and ensure truthfulness to the court? These are already the ethical standards governing lawyers.
Practitioners must be competent (RPC 1.1) and diligent (1.3). They must respect confidentiality even at their own peril (1.6, Business & Professions Code section 6068(e)).
And they certainly must employ candor with the Court (3.3).
Enacting a new law when there are multitudes of professional standards already in place may not move the ball on improving the profession or protecting clients. As a practical matter, what will move that ball is enhanced training on the existing standards, and, more importantly, vigorous tutorials on exactly how to safely utilize AI in performing the day-to-day tasks of the profession. In my experience, the well-meaning lawyers across the state want to learn the practical tips on how to interact with AI. We need to set up the systems and the processes to help them at this deeply practical level – and the need for legislation can be later evaluated as the rapidly shifting landscape settles.
Heather Linn Rosing is a founding partner at Rosing, Pott & Strohbehn, LLP
