This article was originally published in Indiana Lawyer.
Advances in artificial intelligence (AI) have given us computer programs capable not only of summarizing and analyzing existing data but also of creating content. Generative AI produces new documents, pictures and music in response to questions or instructions. Generative AI programs such as ChatGPT, Bard and Sparrow use large language models to process and generate natural language.
AI’s ability to understand and respond in natural language enables software that assists lawyers in analyzing large volumes of information in documents, deposition transcripts and court records. Generative AI can even draft complaints, briefs and other legal documents. AI promises speed and accuracy in handling legal tasks, significantly lowering costs for both firm and client. But as a recent case from New York illustrates, lawyers must exercise care when relying on AI.
The Perils of Using AI to Draft Legal Briefs
In Mata v. Avianca, the court sua sponte issued sanctions against attorneys in what is likely the first instance of sanctions for professional misconduct arising from the use of AI to draft legal filings. Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023). Avianca moved to dismiss, and Mata’s attorneys filed an opposition brief citing several cases that strongly supported judgment in Mata’s favor. The court might well have found for Mata but for one slight complication: the brief was filled with citations to fake cases invented by ChatGPT. After concerns were raised as to the validity of the cases, Mata’s counsel filed an affidavit annexing copies of portions of the purported cases, still without disclosing the reliance on ChatGPT.
Under Rule 11 of the Federal Rules of Civil Procedure, a court may sanction an attorney for, among other things, misrepresenting facts or making frivolous legal arguments. The court issued Rule 11 sanctions against Mata’s counsel and their firm for conduct amounting to “subjective bad faith” because: (1) he had no experience with or knowledge of ChatGPT before using it and did not learn how it worked or what risks it posed; (2) he did not check whether the cases cited in the brief were fake but instead continued to advocate for them after being warned there was a high probability they did not exist; (3) he failed to disclose his reliance on ChatGPT in his affidavit to the court; and (4) he untruthfully asserted that ChatGPT had merely “supplemented” his research when in fact it was the sole source of his research.
Courts Now Requiring AI-Related Certifications
In response to concerns about such misuse of AI, some courts and judges now require attorneys and pro se parties to certify whether they used AI to draft documents submitted for filing and to independently verify the accuracy of AI-generated content. Examples include:
• N.D. Ill., Indiv. Prac. R., Fuentes, J. (Standing Order for Civil Cases Before Magistrate Judge Fuentes) (requiring attorneys and parties to identify any AI tools used to conduct legal research or to draft documents filed in court);
• E.D. Pa., Indiv. Prac. R., Baylson, J. (Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson) (requiring attorneys and pro se litigants to disclose use of AI and certify that they verified the accuracy of each citation);
• N.D. Tex., Indiv. Prac. R., Starr, J. (Mandatory Certification Regarding Generative Artificial Intelligence) (requiring attorneys and pro se litigants to file a certification regarding generative artificial intelligence when filing their appearance);
• N.D. Tex. Bankr. General Order 2023-03 (In Re: Pleadings Using Generative Artificial Intelligence) (requiring attorneys and pro se litigants to verify that they have checked the accuracy of any language in court filings generated by AI); and
• USCIT, Indiv. Prac. R., Vaden, J. (Order on Artificial Intelligence) (for any submission containing text drafted with the assistance of generative AI, requiring attorneys and parties to identify the portions of the document generated by AI, to specify the AI tool used, and to certify that use of the tool did not disclose confidential information to unauthorized parties).
The Indiana Rules of Professional Conduct and AI
The use of AI implicates various ethical issues for law firms. Some key issues under the Indiana Rules of Professional Conduct are discussed below.
Rule 1.1 of the Indiana Rules of Professional Conduct states: “A lawyer shall provide competent representation to a client[, which] requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Comment 6 to Rule 1.1 clarifies what “maintaining competence” requires: “[A] lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with technology relevant to the lawyer’s practice[, and] engage in continuing study and education … .” (emphasis added).
This competence standard will likely require lawyers to understand the artificial intelligence tools they use and to learn to use them effectively. It may also require attorneys to detect when a tool is malfunctioning and generating insufficient or incorrect results, and to assess and manage such risks to ensure accuracy. Lawyers must likewise stay abreast of changes AI brings to substantive law. E.g., Go2Net, Inc. v. C I Host, Inc., 115 Wash. App. 73, 60 P.3d 1245, 1251 (2003) (contract language should have accounted for actions not only by humans but also by machines).
Rule 1.4 requires attorneys to communicate with clients and, inter alia, “reasonably consult with the client about the means by which the client’s objectives are to be accomplished,” as well as “keep the client reasonably informed about the status of the case.” These communication requirements could mean lawyers must inform their clients when nonstandard AI tools (i.e., tools other than Westlaw, Relativity and the like) handle legal work, and possibly obtain client approval. Even if clients consent and share in the cost savings, the lawyer may still be liable for errors resulting from the use of an AI tool.
Rule 1.6 mandates that a “lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” This rule may require law firms to ensure that client data fed into an AI tool is not used to train the program and is kept safe and secure, with no privacy violations and no conflicts of interest. Firms must also conduct due diligence to ensure their AI and technology vendors do the same.
Rule 5.1 requires supervisors in law firms to make reasonable efforts to ensure that “lawyer[s] conform to the Rules of Professional Conduct.” Further, under Rule 5.3, a partner must properly supervise nonlawyer “assistants” on a case and ensure their compliance with these rules. Firms may need to ensure their nonlawyers do not rely on AI tools to provide legal advice to clients and thereby engage in the unauthorized practice of law. Attorneys who fail to properly supervise nonlawyers in such situations could be liable for assisting in the unauthorized practice of law.
Suggested Practices to Navigate the Use of AI
Firms could require mandatory training for their attorneys to ensure they know how to use AI tools and understand the attendant risks and hazards. Malpractice insurers will likely require firms to have robust policies and procedures, with appropriate checks and balances, in place when using AI in order to minimize claims of negligence.
If law firms use AI tools for data analytics and predictive analysis in assessing possible case outcomes, litigation costs and the feasibility of claims, they may need to ensure the data sets used and the results generated are accurate and fair. The same applies to trial strategies.
In addition to ensuring their AI tools and vendors meet the necessary standards, firms may need to confirm there is no bias in the data sets or results and no use of third parties’ copyrighted, proprietary or trade secret materials. Firms may also need to ensure the AI technology they use is safe and secure from cyberattacks, viruses and malicious software, and other systemic vulnerabilities.
For more information, please contact the authors or any attorney with Frost Brown Todd’s Litigation practice group.