
Confidentiality is one of the most fundamental ethical duties in the legal profession (1). From the earliest stages of training, lawyers are immersed in a culture of discretion: they are expected to keep their clients' secrets, whatever those secrets may be. Regardless of how clean, murky, or uncomfortable a client's brief may be, lawyers are bound by rules of professional conduct not to disclose it. This duty is not merely a tradition. It is a cornerstone of professional privilege and of the trust that underpins the lawyer-client relationship.

In the era of generative artificial intelligence (AI), the boundaries of confidentiality are being tested. As legal professionals increasingly integrate tools such as ChatGPT, Copilot, Gemini, and other AI assistants into their daily workflows, the question arises: can lawyers truly use AI without compromising client confidentiality?

How Generative AI Works

Generative AI systems are powered by large language models (LLMs), trained on vast datasets to generate human-like text responses. These systems process user inputs, such as prompts, questions, or uploaded documents, to produce outputs such as draft contracts, summaries, and legal research suggestions. These inputs do not disappear after use: they are transmitted to cloud infrastructure, which may be operated by the AI provider or by third-party vendors (2).

The same qualities that make these tools powerful also make them risky. Many public AI platforms store and analyse user inputs to improve performance. Unless explicitly configured otherwise, they may retain this data, often on servers hosted by third-party providers, and use it to train future models (3).

Unlike a human lawyer, AI does not distinguish between sensitive and trivial information. Every input is data; every document is a potential learning opportunity. Once information is submitted, it often leaves the user's control.

Confidential client data entered into an AI system may be stored, analysed, or reused in ways lawyers cannot control. Even if a data breach occurs and liability is contested, the core responsibility remains: lawyers must ensure that confidential information never leaves their control.

The Confidentiality Dilemma

The risks to legal professionals are clear. Uploading a client's contract, non-disclosure agreement, or facility agreement into an AI tool may feel convenient, but it risks exposing sensitive data to platforms that are not subject to legal privilege or professional ethics.

For example, OpenAI has disclosed that conversations with its chatbot may be used to improve its models unless chat history is disabled. More concerning still, it also notes that user interactions may be disclosed in response to legal requests. Information shared with an AI platform can, therefore, potentially be used in litigation. These interactions are not protected by legal professional privilege. Thus, sharing data with an AI platform can be interpreted as sharing it with a third party, thereby waiving privilege.

Lawyers risk professional misconduct findings, data protection penalties, and reputational damage. A breach of confidentiality is not only a regulatory issue; it erodes the foundational trust between lawyer and client.

Regulatory Expectations: EU, UK, and California

European Union (EU)

The Council of Bars and Law Societies of Europe (CCBE) sets out the Code of Conduct for European Lawyers, which applies across EU member states. The Code designates confidentiality as a "fundamental and primary right and duty" of the lawyer. Importantly, this duty is not time-limited. Confidentiality also applies to all documents prepared by the lawyer, to all those delivered by the lawyer to their client, and to all communications between them. Lawyers are also required to ensure that associates, staff, and any third parties involved in service delivery adhere to the same standard of confidentiality. In practice, this means that using a public AI tool that stores or processes confidential client data could violate this duty, as the lawyer cannot guarantee that the third-party tool upholds the same strict standards. Delegating to a third-party AI system does not relieve the lawyer of responsibility.

Interestingly, the Code goes further by requiring that all individuals with whom a lawyer collaborates, whether employees, non-lawyers, or external parties, must also uphold these confidentiality obligations (4). This provision clarifies that collaboration with non-lawyers is permitted, but only if the lawyer takes all reasonable measures to ensure that these persons comply with the same confidentiality standards. In the digital context, this raises a critical question: can a lawyer mandate an AI platform, operated by a third party, to keep secrets in the same way a human employee or contractor would?

United Kingdom (UK)

In England and Wales, the Solicitors Regulation Authority (SRA) governs legal practice. Under Paragraph 6.3 of the SRA Code of Conduct, solicitors must keep the affairs of both current and former clients confidential unless disclosure is required by law or the client consents. The SRA has issued guidance emphasising that lawyers remain personally responsible for confidentiality, even when outsourcing or using technology. If a solicitor uses an AI system and that system compromises client data, the solicitor, not the AI provider, is liable. There is a duty to assess the risks of data breaches, conduct due diligence on service providers, and maintain safeguards that align with professional obligations. The bottom line is that confidentiality cannot be outsourced.

California (United States)

The California State Bar's Committee on Professional Responsibility and Conduct (COPRAC) has taken a firm position on the use of generative AI. In a 2024 advisory opinion, COPRAC stated that lawyers must never input confidential client information into any generative AI platform that uses such data for model training or to generate responses for other users. The opinion recommends that lawyers consult IT professionals before using any AI tool, carefully review the platform's data storage and retention policies, and avoid inputting sensitive details unless robust safeguards are confirmed. COPRAC's position sets a high bar: the duty of confidentiality is non-delegable and must be maintained regardless of the technology used.

Using AI Responsibly: Practical Safeguards

Despite these risks, AI tools can still be used responsibly if appropriate safeguards are in place. Legal professionals should adapt their practices to maintain confidentiality while leveraging the productivity benefits of AI.

Practical steps include:

1. Anonymise Inputs

  • Avoid entering client names, company names, monetary values, or identifying details
  • Use generic terms or placeholders (e.g. 'Party A' and 'Party B' instead of actual names)

2. Disable Memory and Data Sharing

  • Turn off options such as "Improve the model for everyone" or chat history
  • Ensure conversations are not retained or used for model training

3. Consult IT and Data Security Experts

  • Consult IT professionals and verify how the AI provider stores, secures, and processes data
  • Understand where data is located and whether it is encrypted

4. Establish Internal Policies

  • Develop firm-wide guidance on acceptable AI use
  • Train staff to follow confidentiality protocols when using these tools
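The placeholder approach in step 1 can be automated. Below is a minimal sketch, assuming a hypothetical in-house redaction step run on any text before it is sent to an external AI tool; the `anonymise` and `restore` helpers and the example names are illustrative, not part of any real product.

```python
import re

def anonymise(text, sensitive_terms):
    """Replace each sensitive term with a generic placeholder ('Party A', 'Party B', ...)
    before the text leaves the firm's systems. Returns the redacted text and a
    mapping so the original names can be restored locally afterwards."""
    placeholders = {}
    for i, term in enumerate(sensitive_terms):
        label = f"Party {chr(ord('A') + i)}"
        placeholders[label] = term  # keep the mapping in-house, never send it out
        text = re.sub(re.escape(term), label, text)
    return text, placeholders

def restore(text, placeholders):
    """Map placeholders in the AI's output back to the original names, locally."""
    for label, term in placeholders.items():
        text = text.replace(label, term)
    return text

# Hypothetical example: a draft clause with client-identifying details
draft = "Acme Ltd shall indemnify Jane Doe for losses under this agreement."
safe, mapping = anonymise(draft, ["Acme Ltd", "Jane Doe"])
print(safe)  # -> "Party A shall indemnify Party B for losses under this agreement."
```

A real deployment would also catch monetary values, dates, and addresses (for example with pattern matching or a named-entity recogniser), but the principle is the same: identifying details are substituted before submission, and the mapping never leaves the lawyer's control.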

Final Thoughts

Generative AI is undoubtedly a useful tool for legal professionals. However, the duty to protect client secrets is not optional, negotiable, or secondary to speed. The duty of confidentiality is not a burden to be set aside for technology; it is the very foundation of trust between lawyer and client, and it must remain inviolate in the digital age. As AI becomes embedded in legal workflows, legal professionals must adapt without sacrificing their core values. Efficiency will shape the future of law, but vigilance will define its integrity.

Bibliography

  1. Legal Lens, 'Legal Ethics: What Every Client and Solicitor Should Know' https://legallens.org.uk/legal-ethics-what-every-client-and-solicitor-should-know/
  2. Anne Håkansson and Gloria Phillips-Wren, ‘Generative AI and Large Language Models – Benefits, Drawbacks, Future and Recommendations’ (2024) Procedia Computer Science 246, 5458 https://doi.org/10.1016/j.procs.2024.09.689
  3. Reece Rogers, ‘How to Stop Your Data from Being Used to Train AI’ (Wired, 15 August 2023) https://www.wired.com/story/how-to-stop-your-data-from-being-used-to-train-ai/
  4. Council of Bars and Law Societies of Europe (CCBE), Model Code of Conduct for European Lawyers (2021) https://www.ccbe.eu/EN_DEONTO_2021_Model_Code.pdf

CONTACT SOPHIE NKWAP

For more information, enquiries, contributions and article submissions
