CONTACT SOPHIE NKWAP
For more information, enquiries, contributions and article submissions
Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
In this blog, Aduragbemi Odubela explains some of the issues surrounding the use of AI in the legal profession, and the importance of confidentiality.
Confidentiality is one of the most fundamental ethical duties in the legal profession (1). From the earliest stages of training, lawyers are immersed in a culture of silence. They are expected to keep quiet, stay mute, and feign ignorance regarding the secrets of their clients. Regardless of how clean, murky, or uncomfortable a client's brief may be, lawyers are bound by rules of professional conduct not to disclose it. This duty is not merely a tradition. It is a cornerstone of professional privilege and of the trust that underpins the lawyer-client relationship.
In the era of generative artificial intelligence (AI), the boundaries of confidentiality are being tested. As legal professionals increasingly integrate tools such as ChatGPT, Copilot, Gemini, and other AI assistants into their daily workflows, the question arises: can lawyers truly use AI without compromising client confidentiality?
Generative AI systems are powered by large language models (LLMs), trained on vast datasets to generate human-like text responses. These systems process user inputs such as prompts, questions, or uploaded documents to produce outputs such as draft contracts, summaries, and legal research suggestions. These inputs do not disappear after use. They are transmitted to cloud storage, which may be operated by the AI provider or by third-party vendors (2).
The same qualities that make these tools powerful also make them risky. Many public AI platforms store and analyse user inputs to improve performance. Unless explicitly configured otherwise, they may retain this data, often on servers hosted by third-party providers, and use it to train future models (3).
Unlike a human lawyer, AI does not distinguish between sensitive and trivial information. Every input is data; every document is a potential learning opportunity. Once information is submitted, it often leaves the user's control.
Confidential client data entered into an AI system may be stored, analysed, or reused in ways lawyers cannot control. Even if a data breach occurs and liability is contested, the core responsibility remains: lawyers must ensure that confidential information never leaves their control.
The risks to legal professionals are clear. Uploading a client's contract, non-disclosure agreement, or facility agreement into an AI tool may feel convenient, but it risks exposing sensitive data to platforms that are not subject to legal privilege or professional ethics.
For example, OpenAI has disclosed that conversations with its chatbot may be used to improve its models unless chat history is disabled. More concerning, it also notes that user interactions may be disclosed in response to legal requests. Information shared with an AI platform can therefore potentially be used in litigation. These interactions are not protected by legal professional privilege, so sharing data with an AI platform can be treated as disclosure to a third party, thereby waiving privilege.
Lawyers risk professional misconduct findings, data protection penalties, and reputational damage. A breach of confidentiality is not only a regulatory issue; it erodes the foundational trust between lawyer and client.
The Council of Bars and Law Societies of Europe (CCBE) sets out the Code of Conduct for European Lawyers, which applies across EU member states. The Code designates confidentiality as a "fundamental and primary right and duty" of the lawyer. Importantly, this duty is not time-limited. It extends to all documents prepared by the lawyer, to all documents delivered by the lawyer to their client, and to all communications between them. Lawyers are also required to ensure that associates, staff, and any third parties involved in service delivery adhere to the same standard of confidentiality. In practice, this means that using a public AI tool that stores or processes confidential client data could violate this duty, because the lawyer cannot guarantee that the third-party tool upholds the same strict standards. Delegating to a third-party AI system does not relieve the lawyer of responsibility.
Interestingly, the Code goes further by requiring that all individuals with whom a lawyer collaborates, whether employees, non-lawyers, or external parties, must also uphold these confidentiality obligations (4). This provision clarifies that collaboration with non-lawyers is permitted, but only if the lawyer takes all reasonable measures to ensure that these persons comply with the same confidentiality standards. In the digital context, this raises a critical question: can a lawyer mandate an AI platform, operated by a third party, to keep secrets in the same way a human employee or contractor would?
In England and Wales, the Solicitors Regulation Authority (SRA) governs legal practice. Under Paragraph 6.3 of the SRA Code of Conduct, solicitors must keep the affairs of both current and former clients confidential unless disclosure is required by law or the client consents. The SRA has issued guidance emphasising that lawyers remain personally responsible for confidentiality, even when outsourcing or using technology. If a solicitor uses an AI system and that system compromises client data, the solicitor, not the AI provider, is liable. There is a duty to assess the risks of data breaches, conduct due diligence on service providers, and maintain safeguards that align with professional obligations. The bottom line is that confidentiality cannot be outsourced.
The California State Bar's Committee on Professional Responsibility and Conduct (COPRAC) has taken a firm position on the use of generative AI. In a 2024 advisory opinion, COPRAC stated that lawyers must never input confidential client information into any generative AI platform that uses such data for model training or to generate responses for other users. The opinion recommends that lawyers consult IT professionals before using any AI tool, carefully review the platform's data storage and retention policies, and avoid inputting sensitive details unless robust safeguards are confirmed. COPRAC's position sets a high bar: the duty of confidentiality is non-delegable and must be maintained regardless of the technology used.
Despite these risks, AI tools can still be used responsibly if appropriate safeguards are in place. Legal professionals should adapt their practices to maintain confidentiality while leveraging the productivity benefits of AI.
Practical steps include:
1. Anonymise Inputs: strip client names, addresses, and any other identifying details before submitting text to an AI tool.
2. Disable Memory and Data Sharing: turn off chat history and opt out of model training wherever the platform allows it.
3. Consult IT and Data Security Experts: review a platform's data storage and retention policies before adopting it.
4. Establish Internal Policies: define which tools may be used, for which tasks, and with what categories of data.
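The anonymisation step above can be partially automated. The following is a minimal sketch in Python using only the standard library; the redaction patterns and the client name list are illustrative assumptions, and simple pattern matching is no substitute for human review of what leaves the firm.

```python
import re

# Hypothetical example: redact obvious identifiers from a prompt before it is
# sent to any external AI service. Patterns and names here are illustrative only;
# real matters require far more thorough review than simple pattern matching.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def anonymise(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        # Case-insensitive replacement of each known client name.
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = ("Summarise the NDA between Acme Ltd and jane.doe@example.com, "
          "tel +44 7700 900123.")
# Only the redacted prompt, never the original, would be submitted to the AI tool.
print(anonymise(prompt, ["Acme Ltd"]))
```

A sketch like this would sit inside the firm's own tooling as a gate in front of any AI integration, so that the policy in steps 1 and 2 is enforced mechanically rather than left to individual discretion.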
Generative AI is undoubtedly a useful tool for legal professionals. However, the duty to protect client secrets is not optional, negotiable, or secondary to speed. The duty of confidentiality is not a burden to be set aside for technology; it is the very foundation of trust between lawyer and client, and it must remain inviolate in the digital age. As AI becomes embedded in legal workflows, the legal professional must adapt without sacrificing their core values. Efficiency will shape the future of law, but vigilance will define its integrity.
Aduragbemi Odubela is an MRes student at Leeds Law School.