AI & Law: Are We Witnessing the Death of the Legal Author?

The contemporary scenario: AI, life, and law

It is now commonplace to hear about the use of artificial intelligence (AI) – especially large language models (LLMs) such as ChatGPT – in virtually every aspect of life. Discussion about the extent, efficiency, and ethical implications of this usage is truly widespread.

The legal sphere is no exception. AI has been used consistently and increasingly for various purposes, including document review, smart contracts, legal research, due diligence, predictive analytics, and “legal” chatbots. It has also been used by lawyers to argue before courts of law (infamously citing fake cases), by governments to automate repetitive administrative tasks, and even to aid – or replace – law-creation at local and federal levels.

Legal theory and philosophy in the advent of transformer models of AI

In the face of the technological developments made possible by the advent of transformer models of AI, several positions have been adopted across legal theory and philosophy. On the one hand, there are those who strongly support the use of AI and LLMs for traditional legal tasks, some even going so far as to support the gradual replacement of human intervention in them (at least in the most menial ones). Reasons for this support range from efficiency and the elimination of human biases to superior epistemological capacities grounded in superior computational power.

On the other hand, there are those who raise strong concerns, not only about the current and eventual use of AI and LLMs for traditional tasks but especially about the possibility of replacing humans in most legal tasks. Reasons for concern range from the current limitations of the technology – which result in “hallucinations” and the reproduction of human biases under the guise of objectivity – to the impossibility of ascribing responsibility to an entity that does not make “decisions” in the traditional sense.

For the most part, these positions and the related heated debates circle around a normative question: “Should AI create and interpret law?” Much less attention has been given to a different, albeit prior, question: “Can AI create and interpret law?” In other words: is AI capable of producing outputs that can be deemed “law” (at least, law as we know it)? Can AI be a “legal author”?

“Can AI create and interpret law?” Descriptive and conceptual ways of answering

One way to answer these questions is descriptive, entailing extensive testing of commercially available large language models. To a degree, such testing has already been conducted against both human and machine benchmarks. Carried out systematically and rigorously, this research confirms the widespread impression among lawyers that ChatGPT and similar commercially available products are practically useful – large language models can, in fact, produce outputs that are convincingly legal and display all the signs of what we usually call legal reasoning.
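For readers curious what such descriptive testing looks like in miniature, here is a deliberately simplified sketch: a model’s answers are scored against a small gold-standard answer set. Every question, answer, and the exact-match scoring rule below are invented for illustration only; the actual studies use far larger, curated benchmarks and much more careful grading.

```python
# A hypothetical sketch of descriptive benchmarking: compare model answers
# to gold answers on short legal questions. All data here is invented; in a
# real study the answers would come from calls to the system under test.

# (question, gold answer) pairs - illustrative examples only.
gold = {
    "Q1: Which doctrine binds lower courts to precedent?": "stare decisis",
    "Q2: What is the standard of proof in most civil cases?": "preponderance of the evidence",
}

# Answers as a model might return them (hard-coded for this sketch).
model_answers = {
    "Q1: Which doctrine binds lower courts to precedent?": "Stare decisis.",
    "Q2: What is the standard of proof in most civil cases?": "beyond a reasonable doubt",
}

def normalize(s: str) -> str:
    # Crude normalization so trivial formatting differences don't count as errors.
    return s.lower().strip(" .")

correct = sum(normalize(model_answers[q]) == normalize(a) for q, a in gold.items())
print(f"accuracy: {correct}/{len(gold)} = {correct / len(gold):.0%}")
```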

The other way of answering these questions is conceptual. Philosophical notions of the creation and interpretation of law rest on certain fundamental premises, which contain the conditions of possibility for something to count as ‘law’ and ‘law-creation’. For one, according to most accounts, contemporary law-creation entails that there is a legal author in a position of authority to create text that is legal in nature. Secondly, we tend to understand the legal product of a determinate legal author as an artefact – a text or sign in which the intention of the authority was materialised and brought to life, with the purpose of regulating behaviour. To even start discussing whether AI can create law and whether AI-made law can be interpreted, it is necessary to establish whether AI fulfils the conditions for the possibility of law-creation, at least as these are dominantly conceived in legal theory and philosophy.

Does AI fulfil the conditions for the possibility of law-creation?

We can follow several steps to answer this inquiry. First, we need to define what it means to be a creator of law or legal text (i.e. what it means to be a legal author-ity) and what it means for something to count as “law” or a “legal text”. Generally, “law” is conceived as an artefact: a product of intentional creation by a certain subject or group of subjects, designed to perform a specific function. This also aligns with how an authoritative relationship is generally conceived: a three-part relation between an author, an output, and an addressee. Two main questions arise here: does an “author” need any factual intention in the production of the output for it to “count as law”, or would it be sufficient for such an intention to be presupposed in some context? And does the addressee have any role in shaping the characteristics of the output through their interpretation of it, or are they merely a passive party?

Second, we need to explore generative AI, in particular large language models, with a specific focus on how they function and what they output. From 2023 onwards, there has been extensive probing of LLMs by legal scholars, and the existing studies demonstrate impressive capabilities in various domains of the legal profession: as the models “understand” words in sentences and their meaning by “focusing” on words and their contexts, they “understand” language and output meaningful language. This is hardly surprising, given that law is (closely related to) language. It is, however, dubious whether the development of the current models will ever lead to intelligence, reasoning, cognitive abilities, intentions, or mental states: current studies show that anything like “consciousness” is nowhere to be found in current AI. Likewise, anything akin to “intention” seems nowhere to be found in current AI, nor behind its outputs.
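As a side note for readers unfamiliar with the mechanics: the “focusing” mentioned above refers to the attention mechanism at the core of transformer models. The following is a minimal, illustrative sketch in NumPy of scaled dot-product self-attention – a toy reconstruction of the general technique, not the implementation of any particular product such as ChatGPT; all names and values are invented for illustration.

```python
# Minimal sketch of scaled dot-product self-attention, the mechanism by
# which a transformer relates each word to its context. Illustrative only.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the per-row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Each token's query is compared with every token's key; the resulting
    # weights say how strongly each token "focuses" on the others, and the
    # output is the weight-averaged values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # token-to-token relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

# Toy example: 4 "tokens" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(weights.round(2))  # row i: how strongly token i attends to each token
```

Nothing in this computation resembles an intention or a mental state: it is weighted averaging over learned numerical representations, which is precisely why the question of whether such outputs can count as law arises.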

Third, we need to consider two conclusions that seemingly follow from this: (1) the outputs of an LLM cannot be considered law because, in the absence of anything that we might reasonably call an intention in ChatGPT, the textual outputs it produces that resemble legal texts cannot count as law; and (2) if there is no intention to be identified behind a text produced by an LLM, then determining the meaning of an LLM’s outputs cannot be considered interpretation. However, these conclusions seem counterintuitive. For one, we already know of instances in which real-life legislators used LLMs to draft entire bills, and of at least one instance where such a bill has passed and is now part of a legal system. Human interpreters will be faced with the task of interpreting those texts.

A way out: ascribing human intention to AI outputs or rejecting intention as a condition for law-creation

In our opinion, there are two possible ways out: either (1) the claim that intention is a condition of the possibility for law-creation and law-interpretation is true, and human intention should somehow be ascribed to the outputs of LLMs; or (2) that claim is false.

The former will guide us through an analysis of different scenarios of seemingly shared authorship between humans and AI, and of the conditions under which intention can be extended or “borrowed” from the human to the AI. The latter will guide us through an analysis of different scenarios where intention might not be needed: either because it is not necessary for artefact creation or for the functioning of legal outputs, or because the only thing we truly need is for the text-readers to presuppose it (even if it is not, and can never be, real).

 


To read more about this, check out this recent article: Rabanos, Julieta A. & Spaić, Bojan, “The Death of the Legal Author. Authority, Intention, and Law-Creation in The Advent of GenAI”, Law and Philosophy (2025), https://doi.org/10.1007/s10982-025-09524-9.

My work on this blog post has been supported by Grant RYC2023-043168-I funded by MICIU/AEI/10.13039/501100011033 and by the FSE+.

My work on the cited article results from research conducted within the Horizon Twinning project “Advancing Cooperation on The Foundations of Law – ALF” (project no. 101079177). This project is financed by the European Union.


SUGGESTED CITATION: Rabanos, Julieta A.: “AI & Law: Are We Witnessing the Death of the Legal Author?”, FOLBlog, 2025/6/3, https://fol.ius.bg.ac.rs/2025/06/03/witnessing-death-of-the-legal-author/


Licensed under CC BY-SA 4.0