CONTACT SOPHIE NKWAP
For more information, enquiries, contributions and article submissions
Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
We must come to a point in legal practice where AI ethics is regulated rather than contemplated as a mere consideration. We cannot outsource legal obligations.
In this blog, we explore the issues surrounding AI replacing legal representation.
The deployment of 'Artificial Intelligence' (AI) tools in legal practice has sparked widespread discussion and, more critically, the question of whether this emergent technological shift threatens fundamental legal rights. The recent decisions in Frederick Ayinde v The London Borough of Haringey and Hamad Al-Haroun v (1) Qatar National Bank QPSC and (2) QNB Capital LLC emphasise that those who use AI tools to conduct legal services have a 'professional duty of gatekeeping' (Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023)).
AI tools can perform many tasks traditionally handled by lawyers and are increasingly used in legal contexts for drafting contracts and legal memos, conducting legal research, assisting in case law analysis, and offering initial guidance to self-represented litigants. Such technologies offer rapid, cost-effective support, and lawyers burdened by high caseloads may use them to manage tasks more efficiently. However, can replacing human legal intelligence with AI systems undermine the right to meaningful legal representation? Can AI tools satisfy a client's expectation of effective legal representation?
There is a growing number of cases in which the use of AI, particularly the citation of fictitious or non-existent authorities, is in contention. This raises questions about the implications of such conduct for the right to legal representation. In Zzaman v Commissioners for His Majesty's Revenue and Customs [2025] UKFTT 00539 (TC), the Tribunal said: "...Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties."
There have also been cases in other jurisdictions where submitted briefs contained "bogus" AI-generated research, comprising fake citations and quotations: for example, Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023) and Lacey v. State Farm General Insurance Co CV 24-5205 FMO (6 May 2025). In Canada, cases such as Zhang v. Chen [2024] BCSC 285 and Ko v. Li [2025] ONSC 2766 show that the problem is not confined to any single jurisdiction.
The right to a fair hearing is enshrined in numerous national constitutions and international instruments, and central to this right is access to effective and competent legal representation. Consider whether the right to legal representation would be breached in the following circumstances:
In each case, will the lawyer's duty to ensure effective legal representation for the client have been breached, or can 'wilful ignorance' serve as an excuse?
While AI can assist in generating information, it objectively lacks several human attributes essential to legal advocacy, including judgement, ethical responsibility and much-needed human empathy, all of which are relevant to equity, natural justice and good conscience. While lawyers cannot be prevented from using AI, especially for legal research and the preparation of court documents, relying on AI without robust oversight may result in 'procedural injustice'. AI tools should therefore not replace qualified legal representation, particularly where rights and freedoms are at stake. One can only imagine the jeopardy occasioned to an individual standing trial on criminal charges in such circumstances.
Relying on AI tools such as ChatGPT without verifying authorities could result in misleading conduct, particularly in court proceedings. While the Solicitors Regulation Authority (SRA) Code of Conduct does not prohibit the use of AI tools, it mandates professional judgement and accountability. If a solicitor uses AI to cite legal authorities, they have a professional obligation to verify the accuracy of those citations, or risk breaching multiple ethical duties, such as integrity, competence, and the duty not to mislead, whether knowingly or negligently. According to Paragraph 1.4, "You do not mislead or attempt to mislead your clients, the court or others, either by your own acts or omissions or allowing or being complicit in the acts or omissions of others (including your client)." Even if the solicitor was unaware that a case citation was fictitious, failure to check it before submission could amount to negligence or recklessness.
Principle 5 of the SRA Principles also requires lawyers to act with 'integrity'. The Principles are the central doctrines of ethical behaviour expected of all who provide legal services. If a solicitor includes a 'fabricated' case generated by an AI tool in a submission or argument and fails to verify it, they risk a regulatory breach for lack of integrity. Indeed, substituting qualified legal counsel with AI raises concerns about due process. Citing misleading cases in client representation could amount to gross incompetence and negligence. Whatever the jurisdiction, such conduct brings the legal profession into disrepute.
AI tools like ChatGPT are best seen as complementary, not a replacement, and must be used responsibly. We must come to a point in legal practice where AI ethics is regulated rather than contemplated as a mere consideration. We cannot outsource legal obligations. Human oversight must be prioritised: AI tools should supplement, not substitute, human legal services. Lawyers and law firms must ensure transparency; importantly, there is also the question of whether clients should be informed when AI is used in legal contexts.

The legal profession must provide clear guidelines on the permissible use of AI, especially for establishing accountability in the context of legal representation. This means setting clear lines of responsibility whenever AI is deployed to support legal representation in any form. Whether such guidelines are national or international in scope is immaterial; the goal must be the protection of vulnerable citizens who are at the mercy of the justice system. The future of legal practice must remain grounded in human judgement, accountability, and ethical responsibility. These are the hallmarks of ethical legal professional conduct.
Musah Ahmed, Esq. is a Legal Practitioner and Head of Ahmed Legal Consult.