Florida Bar Advisory Opinion 24-1 Gives Green Light to Generative AI Use by Lawyers – With Four Ethical Caveats

February 5, 2024
Lawyers for the Profession®

Brief Summary

The Florida Bar has concluded that a lawyer may utilize generative artificial intelligence (AI) so long as the lawyer can reasonably guarantee compliance with the lawyer's ethical obligations.

Complete Summary

The release of ChatGPT in November 2022 prompted a wide-ranging discussion on lawyer use of generative AI in the practice of law. Generative AI is a "deep-learning model" that compiles data to generate statistically probable outputs when prompted and is capable of analyzing documents and drafting entire briefs.

While it has the potential to improve a lawyer's efficiency, generative AI is still in its infancy and prone to "hallucinate," or create answers that sound convincing but are inaccurate or fabricated. The Florida Bar cited a recent case in the Southern District of New York where a federal judge sanctioned two lawyers and their law firm following their use of false citations created by generative AI.[1]

Citing the Rules Regulating the Florida Bar, which govern a lawyer's ethical duties and are adopted from the ABA Model Rules,[2] the advisory opinion addressed four specific ethical pitfalls when using generative AI:

1. Confidentiality

A lawyer's ethical duties under Rule 1.6 are broad and far-reaching. Absent the client's informed consent or some applicable exception, a lawyer may not reveal confidential information. This advisory opinion recommends that a lawyer obtain the affected client's informed consent prior to utilizing a third-party generative AI tool if the utilization would involve the disclosure of confidential information.

Another consideration under Rule 1.6 when using generative AI is a lawyer's duty to prevent the inadvertent or unauthorized disclosure of confidential information. Because generative AI is self-learning, it may, as it continues to add inputs to existing parameters, reveal a client's information in response to future prompts by third parties. The opinion recommended the following to protect client information:

  1. Ensure that the AI provider is required to preserve confidentiality, that the obligation is enforceable, and that the provider will notify the lawyer in the event of a breach;
  2. Investigate the provider's security measures and policies; and
  3. Determine whether the provider retains information submitted by the lawyer before and after the discontinuation of services.

Confidentiality concerns may be mitigated by using an in-house AI program that does not disclose confidential information to a third party, which removes the lawyer's obligation to obtain informed consent.

2. Oversight

The advisory opinion addressed generative AI use by a nonlawyer and applied the standards under Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance):

  1. First, a lawyer must ensure that the firm has policies to reasonably ensure that the conduct of a nonlawyer assistant is compatible with the lawyer's own professional obligations.
  2. Second, a lawyer must review the work product of a generative AI, including verifying the accuracy and sufficiency of all research performed by generative AI.
  3. Third, these duties extend to nonlawyers both within and outside the law firm; the use of a third-party-operated generative AI therefore does not relieve the lawyer of the duty to ensure that the AI's actions are consistent with the lawyer's professional obligations.

A lawyer should also consider which tasks may ethically be delegated to generative AI. For example, a lawyer cannot delegate to AI functions that require the lawyer's personal judgment and participation, such that delegating them would constitute the practice of law. If a lawyer uses AI to conduct interviews with prospective clients, the obligations set forth in Rule 1.18 (Duties to Prospective Client) apply.

3. Legal Fees and Costs

Rule 1.5 prohibits attorneys from collecting an unreasonable fee. The increased efficiency from the proper use of generative AI must not result in duplicate charges or falsely inflated billable hours. Applying existing standards, the Florida Bar opined that a lawyer should inform the client, preferably in writing, of the lawyer's intent to charge the client the actual cost of using generative AI.

If the actual cost associated with a client's matter cannot be determined, a lawyer should not prorate the periodic charges of generative AI (if any) but should instead account for those charges as overhead. Finally, the opinion concluded that while a lawyer may charge a client for reasonable time spent on case-specific research, a lawyer should be careful not to charge for time spent developing minimal competence in the use of generative AI.

4. Advertising

A lawyer should be careful when using generative AI chatbots for advertising and intake purposes, as the lawyer will be ultimately responsible should the chatbot provide misleading information to prospective clients or communicate in a manner that is inappropriately intrusive or coercive.

To safeguard against such risks, a lawyer must inform prospective clients that they are communicating with an AI program. Additionally, a lawyer using an AI chatbot should consider including screening questions that limit the chatbot's communications if another lawyer already represents that person.


A lawyer may ethically utilize generative AI technologies, but only to the extent that a lawyer can reasonably guarantee compliance with all existing ethical obligations.

The Florida Bar's advisory opinion identified a host of rules of professional conduct that are implicated through the use of generative AI in a law practice setting, including Rules 1.1 (Competence), 1.5 (Fees), 1.6 (Confidentiality), 1.18 (Duties to Prospective Client), 5.3 (Responsibilities Regarding Nonlawyer Assistance), 5.5 (Unauthorized Practice of Law), and 7.1-7.3 (Communication Concerning a Lawyer's Services and Solicitation of Clients).[3]

As generative AI is still in its infancy, its use presents ethical pitfalls that should give lawyers some pause. As with any technology used in the practice of law, lawyers should continue to develop competence with generative AI and understand its inherent risks and benefits.

[1] Mata v. Avianca, Inc., No. 22-cv-1461, 2023 WL 4114965, at *17 (S.D.N.Y. June 22, 2023).

[2] For multi-jurisdictional applicability, this alert will cite to the Model Rules (for example, Rule 1.6 instead of Rule 4-1.6) rather than the Florida Rules, which are cited by the opinion. The reader is encouraged to consult their own jurisdiction’s ethics rules, as they may differ from the ABA Model Rule or the Florida Rules.

[3] In the opinion, this discussion was framed around Rule 4-7.13 (Deceptive and Inherently Misleading Advertisements) of the Rules Regulating the Florida Bar, which has no Model Rule counterpart but is mostly analogous to the principles espoused in Rules 7.1-7.3.