I’ve thought a little more about the order issued last week by the Circuit Court of the City of Richmond concerning attorneys’ and pro se litigants’ use of artificial intelligence.
I’ve had some interesting conversations with plaintiff and defense lawyers concerning the order, and it’s one of those issues where I think we all are on the same page. Some of you have asked that I provide you a copy. This link will get you a copy: https://www.mottleylawfirm.com/library/1-13-26-Local-Rule-on-Artificial-Intelligence.pdf
My first thought concerning the order is more of a question. What precipitated it? Did an incident occur in Richmond in which a lawyer blatantly used AI to draft a pleading containing “hallucinations” or fictitious cases, as happened in the well-known New York case? Or was the order just the result of the court wanting to weigh in on a contemporary issue? Have other courts in Virginia issued similar orders? If you have any insight, I’d like to hear about it.
I appeared in Richmond Circuit Court last week. I was prepared to ask the judge off the record what triggered the order, but we had a substitute judge from Fairfax. We asked a scheduling clerk in the Clerk’s office if we could have a copy of the order, but they didn’t seem to know what we were even talking about.
My second thought is that the order seems unnecessary and redundant in light of existing ethical and statutory duties. I assume the general intent of the order is to convey to litigants that using AI in court is risky business. It’s a “shot across the bow,” so to speak. Okay, fair enough.
But what concern does the court have that is not already covered by Code § 8.01-271.1? If a lawyer or pro se litigant uses AI during the creation of a pleading that, when filed, violates Code § 8.01-271.1, then why does it matter that AI had some role in the process of creating it? If the statute is violated, it’s violated. And, as we’ve seen in New York and in jurisdictions across the country, courts already have the statutory and rule-based authority to sanction people for citing fake cases derived from misuse of AI tools. The court’s own order says that using AI is not “prohibited.” So, again, I ask: what’s the issue the court is seeking to address that existing law doesn’t already cover?
Given the redundancy with Code § 8.01-271.1, I fail to see the purpose in having a local rule or order that requires lawyers and litigants to “certify” when AI has been used by them. We don’t require such a certification when, for example, a lawyer uses Lexis to research an issue as opposed to manually pulling bound volumes of the Virginia reporters off the shelves of the Supreme Court of Virginia’s law library. (On second thought, perhaps that is now required in the City of Richmond because Lexis itself has robust AI features.)
Another thought concerns the work product doctrine and the order’s implications for it. When a lawyer chooses to use AI when working on a case, is that choice not work product? AI is just a tool. It is one of many tools available to help us do our jobs more efficiently. Why should a lawyer (or litigant) have to disclose when they have used this particular tool versus the myriad other tools available to them? When a lawyer uses a private investigator, does that fact need to be certified? No. When a lawyer uses the internet, does that need to be certified? Of course not. I could go on and on. To me, the requirement that lawyers disclose in a certification when their work product involves the use of AI is perhaps the most bizarre and troubling aspect of the new rule.
What adds to my concern is the differing ways a “certification” such as that suggested by the court in its order may be interpreted by the judge who reads it. If I certify that “artificial intelligence tools were utilized in the drafting, creation, enhancement, or modification” of a pleading, what thoughts will be going through the judge’s mind when they read that? Will that be interpreted as me telling the court that “ChatGPT drafted this for me, and I signed it”? Or will the court interpret that as me certifying that I looked something up on the internet at some point during the creation of my filing? Those are vastly different pictures that could be created in the judge’s mind, which leads me to wonder how detailed I should get in my certification. Pretty soon, we’ll all be explaining in various “certificates” how we drafted and researched something, which seems silly. But these are the sort of unreconciled issues the order raises in my mind, and that leads to my last thought.
My final thought on the order is more practical in nature. The order drastically underestimates the degree to which AI is already used in our profession. Even a basic internet search on Google involves the use of AI. Whenever a lawyer uses Lexis and its research engine tools, they are using AI. Whenever a lawyer uses Microsoft Copilot to assist in drafting or editing a Word document, that is using AI. AI is used to review and summarize medical records. I could go on and on.
Given the vast reach and use of AI in our profession, it becomes difficult to think of tasks we perform that do not, on some level, involve the use of AI. Given that reality and the breadth of the court’s order, I am beginning to wonder whether we should just file a blanket certificate at the beginning of a case saying, “I just want everyone to know that AI will have some hand in creating virtually everything I file in this case.”
So, there you have it, my thoughts on the new local rule/order. It’s an interesting topic, and I appreciate the court stepping out on a limb to address it.