Leeds Law School

Who Governs the Machines? Understanding Global Efforts to Regulate Artificial Intelligence

From ChatGPT to self-driving cars, artificial intelligence (AI) is no longer science fiction. It is embedded in how we work, learn, commute and even how laws are enforced. But as these systems become more powerful and pervasive, an urgent question emerges: Who governs the machines? This isn’t merely about tweaking algorithms or writing better code. It’s about crafting governance frameworks that protect people, prevent abuse, and guide innovation toward the common good.


Why Global AI Governance Matters

AI systems cross borders with ease. A model built in California may be fine-tuned in Kenya and deployed in a UK hospital. But what happens when it violates EU privacy laws or runs afoul of Chinese content moderation rules?

Consider the case of Clearview AI, a U.S. firm that scraped billions of facial images online and sold them to law enforcement agencies. Though legal in the U.S., it violated EU and UK data laws, leading to multimillion-euro fines and bans on its use in Europe.

Without global rules, risks like algorithmic discrimination, deepfake scams, or opaque chatbot decision-making become harder to contain. At the moment, there is no unified global policy or regulation governing AI.

Infographic reads: 'Core AI Governance Principles'

Designed by Aduragbemi Odubela

The Shared DNA of Ethical AI

Across jurisdictions, common principles consistently emerge:

  • Human rights and freedoms
  • Transparency and explainability
  • Accountability and oversight
  • Safety and robustness
  • Fairness and non-discrimination
  • Data privacy
  • Societal well-being

These aren’t just buzzwords. They are increasingly embedded in legal instruments, from the EU AI Act to the Organisation for Economic Co-operation and Development (OECD) AI Principles.

Risk-Based Regulation: The EU’s Model

The EU AI Act is the world’s first comprehensive legal framework for AI. It categorises systems into four tiers: unacceptable risk (e.g., social scoring, manipulative AI), high risk, limited risk, and minimal risk.

AI systems deemed an unacceptable risk are banned, with the prohibitions applying six months after the Act entered into force. By contrast, “high-risk” systems, such as biometric identification or legal automation tools, face stringent obligations around data governance, documentation and human oversight.

Infographic reads: 'EU AI Model'

Designed by Aduragbemi Odubela

The US: Voluntary and Flexible

The US approach leans on voluntary standards. The National Institute of Standards and Technology (NIST) AI Risk Management Framework offers flexible, sector-agnostic tools. It is structured around four core functions:

  1. Govern – Set internal AI policies
  2. Map – Identify AI systems and risks
  3. Measure – Evaluate performance and harms
  4. Manage – Mitigate and monitor

Though non-binding, it is gaining international credibility. The International Organization for Standardization’s AI Management System standard (ISO/IEC 42001:2023) complements this soft-law model.

China: Strategic and Strict

China has positioned AI as a strategic national asset. Its approach blends innovation with tight control, through laws that include:

  • the Interim Measures for Generative AI Services
  • the Deep Synthesis Regulations, which govern both content and infrastructure.

The AI Security Governance Framework released by TC260 in 2024 underscores this, classifying AI risks as “inherent” (from the technology itself) and “applied” (contextual risks). China's governance remains centralised and aligned with state interests.

The UK: Innovation First, Regulation Later

The UK favours a “pro-innovation” stance. There is no standalone AI law yet; instead, the government encourages existing regulatory bodies to oversee AI use within their respective domains. Regulators such as the Solicitors Regulation Authority (SRA) and the Medicines and Healthcare products Regulatory Agency (MHRA) apply existing laws to AI tools in the legal and medical fields.

In legal services, the SRA applies its existing Code of Conduct to AI use: solicitors must maintain integrity, competence and confidentiality, and act in clients’ best interests. The Law Society’s guidance on generative AI stresses transparency with clients, monitoring for bias, understanding AI tools, and ensuring compliance with UK GDPR. It highlights professional accountability, especially when using AI for legal research, document review or contract analysis, to guard against risks like AI hallucinations and inaccurate citations.

In R (Ayinde) v The London Borough of Haringey, decided in April 2025, a barrister submitted legal arguments that relied on five fabricated cases. The judge strongly suspected AI use and referred the barrister and instructing solicitors to their respective regulators (the Bar Standards Board and the Solicitors Regulation Authority) for “appalling professional misbehaviour.” The episode sparked disciplinary review and underscored why legal professionals must monitor for “AI hallucinations” and protect their integrity.

Beyond Nations: Multilateral AI Governance

The OECD AI Principles (2019) laid the groundwork for most international cooperation. The Global Partnership on AI (GPAI) now operates under the OECD, with 44 members, and its multidisciplinary network of more than 500 experts supports inclusive, rights-based AI globally. Yet Senegal is its only African member, a gap that highlights the lack of diversity in global governance.

The Council of Europe’s AI Treaty

In 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first binding international AI treaty. If widely ratified, the Convention could help prevent “ethics dumping”, the practice of testing emotionally manipulative AI in developing countries that lack digital rights protections. This raises real concerns: should a voice bot that simulates empathy be legal in Ghana if it is banned in France?

Who Should Govern the Machines?

The urgent need for harmonised AI governance is no longer theoretical; it is legal, practical and immediate. As AI reshapes how law is practised, the question is not just whether the law can govern intelligent systems, but whether lawyers are prepared to lead that governance. This is a challenge to law students and legal professionals alike: move beyond passive awareness, deepen your AI literacy and become active shapers of this evolving frontier. The future of justice may depend not only on the rules we create but on who has the courage to step forward and craft them.

Interested in contributing or sharing a relevant case study? Please get in touch. We’re always eager to feature new insights on the Leeds Law School Legal Tech Blog.


CONTACT SOPHIE NKWAP

For more information, enquiries, contributions and article submissions
