HAPL #3 – Artificial Intelligence and Legal Theory (ft. Domingos Farinho)

The guest for this third episode of the HAPL podcast is Domingos Soares Farinho, currently professor auxiliar at the Faculdade de Direito of the Universidade de Lisboa, member of the Centro de Investigação em Direito Público (CIDP) do Instituto de Ciências Jurídico-Políticas da Faculdade de Direito da Universidade de Lisboa, and member of Lisbon’s ALF team.
In the first part of this highly interactive interview with Julieta Rabanos and Bojan Spaić, Farinho introduces his academic genealogy and discusses with the hosts the relation between legal theory and legal practice, focusing on Portugal’s experience on the matter and his personal experience as a practitioner. In the second part of the interview, Farinho discusses with the hosts the relation and mutual influence between technological development, law, and philosophy of law. The debate goes on to touch upon regulation of digital content, artificial intelligence, science fiction, production of texts by artificial intelligence, autonomy and liability, criteria for legal personhood, and much more.

Below, you can find the full transcription corresponding to this episode*:


Domingos Farinho:
Again, testing 1–2–3, hello. Join us for the podcast this week on this radio. OK, 28.

Bojan Spaić:
He has very good podcast voice.

Domingos Farinho:
We’ll be doing the rounds on Terrence Malick’s The Thin Red Line.

Bojan Spaić:
We’re not even doing the introduction here. His voice is better than mine, far better than mine for a podcast. We’ll just let him talk for an hour and then stop.

Domingos Farinho:
OK, and we are closing in for today. Go see the movies. Bye bye bye.

Bojan Spaić:
This voice, which is a very podcast voice… But anyhow, our guest in this episode of the podcast is a recent friend. We met recently in Lisbon, maybe four weeks ago, and we’re working on this project that the podcast is part of. We had very good contacts with Lisbon, but it was only a couple of weeks ago that I met Domingos Farinho.

We had been communicating about this conference for a bit longer, and we were discussing the prospects of him coming to Belgrade. So, he’s finally here, and yesterday he gave a lecture on AI regulation in the European Union and a Hohfeldian analysis. And already today we are doing an episode of the podcast with him. Now, the podcast is… and Jules will explain this part.

Julieta Rabanos:
Yeah, the podcast does not have a name yet. We are conducting a poll that started with the previous episode, when we had Marco Segatti. We have two possible names for the podcast. The first one is Incomplete Theorised Disagreements, and the second one is Heavily Accented Philosophy of Law. So, what would be your vote?

Domingos Farinho:
Well, first, thank you for having me. I’m having a great time here. Like Bojan said, I’m a recent addition to the project, but it’s great to be here. We’ve been preparing this for some time. Regarding your question, Julieta, I would say the second, because I think it’s more provocative. So, I would go with the second, but that’s just me.

Bojan Spaić:
So, we’re tied for now. We have Segatti claiming that the first option is the better option and Farinho claiming that the second option is the better one. We’re going to go with the subtitle.

Julieta Rabanos:
Yeah, I also think that the second option is the better option, but then Segatti also told us that maybe we can use one of the two for the title and put the other one in the subtitle, and also another possibility was doing a cycle called, for example, either Heavily Accented Philosophy of Law or Incomplete Theorised Disagreements, and then reserving the other one for the main title of the podcast.

Bojan Spaić:
It feels like Heavily Accented Philosophy of Law will be more descriptive of the contents of the podcast, because we will always have heavy accents, but maybe we won’t always have incompletely theorised disagreements. Yeah, yeah. So, we’ll have to record the introductory episode.

Julieta Rabanos:
Well, we are still doing that. So, I think that we can adjust it, but…

Domingos Farinho:
I like the idea of subtitles. It’s a very common-sense, moderate approach. I like that.

Julieta Rabanos:
OK, I think that we have a second vote for title and subtitle. We have yet to decide what this title and this subtitle would be, but then I think that we are going towards this idea of title and subtitle.

Domingos Farinho:
Titles are very hard. I mean, I was mentioning before we went live that I had this cinema podcast and we were discussing the name, I think for two months. So, it was really tough.

Julieta Rabanos:
So, OK. Well, thank you very much for being here. Maybe we can start with a question, a personal question, like in the sense of, well, who are you? Bojan introduced you, but basically what has been your path, for example in academia, or maybe not only in academia, maybe some non-academic things that you have done in your career?

Domingos Farinho:
OK, thank you. Well, I think I would say that the most stable professional endeavour I have been committed to is academia, in the sense that I think only when I was finishing my PhD did I do just one thing, which was doing the PhD. Other than that, I’ve been working for 23 years, and I have always tried to harmonise academia and some other legal profession.

It has a lot to do with the Portuguese tradition, in which legal scholars were always doing something else, mostly being lawyers. So, there’s a tradition of that. Also because academics are poorly paid. So, there’s that. But other than that, to be completely truthful, I think it’s very good: even if you’re working on something that seems so far off from practical matters, like theory of law or philosophy of law, it’s a good thing to be working on the practical side of law.

So, basically, I did what most law students do after graduation: I did the exam for the bar, so I’m a lawyer. I very much like being a lawyer. I can tell you that I like being a practising lawyer more than being an academic, but I like it because I think they complement each other.

There are many theoretical cases that I come up with, for example, to discuss with my students because they are problems I want to research in a paper, and they end up helping me come up with a solution for a client, and the other way around. There’s stuff I discover looking for solutions to my clients’ problems, and I think, well, this is a great way to think about this problem in the abstract.

So, I did that. I would say that of these 23 years as a professional, I’ve been a lawyer for half of those, more or less. I also did a bit of a detour into public administration, so I worked for a cabinet in the government. This was a while back, and then I headed the department of the Ministry of Justice that worked on alternative dispute resolution. So, this was like three years. Very, very fine.

But mostly I work at the university. I’m a legal scholar, so I do what legal scholars basically do: I teach, which is kind of like a vocation for me. I mean, I remember entering the first day of classes and it felt like I had been doing that all my life. So, I love teaching. And then researching. I have a monastic side to me. I’m a very lonely person – not in the bad sense – I mean, a person that loves solitude.

So, there’s a monastic side to me, and researching is very good for that. So, I have no problem being, like, one week without seeing anyone. Now I can’t do that because I have three children, so it would be bad for them. But I have no problem being, like, a week without seeing a human being, just thinking about problems of the law. That’s that.

What else? Well, I think it’s interesting. I don’t know what other questions you’re going to throw at me, but whenever I talk about theory of law, I always say – and I said that yesterday when I started my talk – that theory of law for me is actually a late addition to my legal interests.

Actually, my PhD, which was more than 10 years ago, was on foundations, so trusts and the public interest. And if you ask me what is the one thing that over 23 years has remained constant in my interests, it’s the public–private divide, which I tended to look at… Now I can see this with more clarity than I did 10 or 15 years ago, but now I look at it through the lens of what I call regulatory matters.

So, basically, I think we live, as legal theorists or legal scholars, at a time where the divide between public and private – especially in continental law – is still very present, and every day it makes less sense. And basically, I think, in my case, I work out of Portugal because of European Union law, but especially because that’s the way society is going.

The state, even if you’re talking about liberal states or more interventionist states, is pretty much doing something regarding people. It’s either giving them subsidies, or regulating their economic activities, or protecting their fundamental rights. And I think it’s wrong to look at it from just one perspective. So, I try to look at it from both sides.

And I think actually that’s what brought me to legal theory, because first I was doing a lot of work on regulation of the third sector, the social sector. Then I moved on to digital law for the exact same reasons: there’s a lot of state interference in the digital domain, which for me was strange. And then I started thinking that everything I was reading on regulation and regulatory powers was very, like, blah blah blah.

So, you have to balance, and there are natural laws and values, and I was like, this can’t be it. And this is where David comes in. He says, look, it’s going to completely change your life if you go analytical. And I go, what are you talking about? “Read this.” So, he gave me these five key recommendations, and I thought, here’s a tool to do better regulatory law.

And I’ve been cultivating myself ever since. Now I’m very much into norm individuation, mostly actually because of David’s influence, but I’m also very much into theories of interpretation. I would say that those are my two favourite topics in legal theory.

I spoke too much. I’m sorry.

Bojan Spaić:
No, you definitely didn’t. I enjoyed it quite a lot. So, the… Duarte is a preacher.

Julieta Rabanos:
He was an analytical preacher.

Domingos Farinho:
Yeah, yeah, that’s true. I think it’s like there’s this word in English, like a zealot.

Bojan Spaić:
Yeah, a proselytiser of analytical philosophy.

Domingos Farinho:
So, if you’re talking about the preaching in that sense…

Yes, and I like him for that because, well, mostly because I know him. He’s my friend. I have seen people that, when he goes at them, don’t know him, and they say, “Oh my God.” So, I’ve seen very harsh things happening in David’s presence, but for me it has always been a source of inspiration and influence.

It has been very good for me because, I mean – he also comes from an administrative law background. His master’s thesis was in administrative law. And it’s very interesting to compare both theses, because one, the PhD, is very analytical, and the other is just your traditional administrative law in the Portuguese tradition.

And I think that change in path really inspired me. Like, I can do this. Maybe not from my master’s to my PhD, but I can do it from my PhD to the rest of my career, and I’ve been trying to do that.

Bojan Spaić:
We have been discussing a bit of this with our previous guests, and you mentioned it in the first part of your reply. What would you say is the relation between legal theory and legal practice in Portugal? Are they far apart? Do they complement each other?

I mean, I understand that practically most of you – and I know Pedro Moniz Lopes, and I know Jorge Sampaio, and I have known David Duarte very well for 10 years now, I think – I know that they also do some practical stuff, they do legal practice. On one hand, I always thought that that would inform or influence heavily their theoretical work, and the other way around.

But what was curious for me, especially in David’s case, is that it seems that the practical part pushes him to be more of a kind of logical formalist in the theoretical part.

Domingos Farinho:
Yeah, that’s an excellent question. Well, let me use your examples, even for people who are listening to us and that don’t know them. Let’s use them like stereotypes or archetypes.

Take, for example, David. David, I think, is one of those cases where, yes – I mean, I was a colleague of David’s as a lawyer. I think 10 years after being colleagues at the university, we ended up working for the same law firm. We were actually called the bats, because all the rooms would have the normal light that people used to work under, but in our room – we were roommates at the law firm – we just used these tiny lamps over our desks, and the rest of the room was always in complete darkness.

And people would go like, “Oh my God.” But why were we together? It was like it was meant to be. That was very good. I think that I knew David from college, but we didn’t talk that much, and then in 2005 we were colleagues for a short period of some months, actually almost a year, and it was very good for our friendship.

And David is one of those kinds of… I think he reacts to legal practice in Portugal with the sense – and you have to know David, so you in the room will understand me; the rest of the listeners won’t, but maybe they will get the feeling – that David, addressing a problem of legal practice, goes like, “Oh, let me see the Official Journal. Oh my God, this law, this is awful. These are all incompetent people.” Then he goes on to read some book written about the law, and he goes, “Oh my God, this is awful. This is incompetent.”

So, his relation with legal practice is like everything is wrong because people don’t have enough theoretical training to be very rigorous. So, I think there’s a one-way influence in a sense. There’s of course a two-way influence in the sense that legal practice drives, as you said – and I completely agree – the need to be even more rigorous, to dive deep. But I think there’s a feedback, of course, in the sense that he will use that – even if every other person does it in a completely different way – he’ll use that in his work.

And I remember we worked together in public procurement. Actually, David was usually the guy you would go to when, for example, you wanted to write the documents for a procurement, which have to be very, very rigorous and very accurate. And this is something very analytical: the rules, the normative propositions that follow, have to be very clear.

Many of the things I was saying yesterday seemed like very modest proposals, which is: you have one text, from that one text you get one rule, and from that one rule you get one legal position. If that were possible, it would be David’s dream, and my dream also, so I can understand.

So, I think that really… This is a kind of – and I would say this is not very common. I think that David is alone in the Portuguese legal landscape in having this kind of relationship between legal practice and legal theory.

Pedro and Jorge… Jorge, I know a little less about his legal practice. I know he works for the Ombudsman in Portugal. But Pedro is also a lawyer, like me and like David. David now is more on the counselling side of the law, not so much a lawyer in law practice, but Pedro is kind of like me. And Pedro, I think, is more along the lines of what I think is my influence also, or the way I react to the connection, which is pretty much what I was saying at the beginning.

I think many of the problems that Pedro is researching in legal theory help him to better reply to legal problems in his practice. So, say, for example, now and for some time he’s been doing a lot of work on the mix between behaviouralism and legal theory, indeterminacy, and this is something that can help you solve a lot of practical problems, or at least gives you some intuitions on how to conduct yourself.

So, I would say there’s that influence, and then I’m sure, because I’ve discussed this with him a couple of times, that there were moments in his law practice where a specific legal problem gave rise to, “This is a very good idea to discuss in a paper.” And this, I think, is the most common position – not very common in Portugal for a reason: there’s not a very analytical legal theory tradition in Portugal.

So, you have a lot of what you would call natural law tradition doing legal theory, both in Coimbra and in my university in Lisbon. I mean, it’s the first time that we have such a well-structured legal theory group that presents itself, at least dominantly, as an analytical approach to legal theory. So, I would say that the connection is still… I mean, it’s not that impressive, because you have very few people working in that area.

Julieta Rabanos:
You mentioned this constant feedback that you find between legal theory and legal practice, at least in the works, for example, of David or Jorge or Pedro. But can you give us maybe an example from your expertise, in the sense of the things that you did in your career in which you did this? For example, that you had a case – obviously without mentioning specific details – so the listeners can see it in practice, like…

Domingos Farinho:
Yeah, OK. I can give you two examples, one which connects to yesterday’s talk and another from another area of law that I do a lot of. Basically, as a lawyer I do mostly administrative law, especially special branches of administrative law, mostly urban planning, and then digital law, basically data protection, which is the one that has some economic interest in Portugal. The others are still just very theoretical; maybe with the Digital Services Act that will pick up.

But if you’re in digital law and you want to do practical legal stuff, it’s mainly, at least in the public law area of regulation, data protection. So, starting with digital law: yesterday I was actually trying to show how I see the connection between legal theory and other branches of law. And for example, for me, now arriving at artificial intelligence, coming from data protection and digital services, one thing that was kind of obvious is, like, when I read the AI Act proposal, I was like, “Oh my God, this is going to be crazy to interpret.”

There’s going to be a lot of confusion, because there are a lot of rules – I mean, normative propositions here – that seem to be talking about the same thing. And I’m very much influenced by recent papers by, actually, David and Pedro on the idea that you must have a very strong connection between normative texts, the rules and principles coming out of those normative texts, and legal positions.

That really helps you in then applying the law, because you can go at it with a certain sense of: I mean, you did the reconstructive effort and now you know what you’re talking about. So, there is no… I mean, if someone comes at you in court or in a legal opinion saying, “No, you’re saying that, but you’re not really talking about the same thing as I am,” you go, “No, no, no. I’ve dealt with that. So, when you say article such-and-such says something, it doesn’t, because of this and that.”

So, I think, for example, in interpreting – and this example is taken from the AI Act proposal, but you could use it, for example, to interpret the new regulations on housing in Portugal or something like that – this is one of the, I would say, one of the domains, and this is why I’m so interested in theories of interpretation and norm individuation.

It’s because I think in the practical domain the problems I see the most are people talking about rules, and even applying them in court, and they have no idea if they are really applying the same rules. So, this is very common. Just to give you a teaser of my presentation on Saturday – I don’t know when people are of course going to listen to this – but one of the things I’m going to talk about on Saturday is based on a very recent paper called “Generative Interpretation”, which is a proposal by some legal scholars in the United States saying, let’s have ChatGPT tell us what the text means, like fulfilling the promise of textualism.

So, textualism, coming from Wittgenstein, is like: you have to interpret what the text means according to the norms of the language, so let’s go to dictionaries. And the example they gave in that paper was very interesting: after Katrina, there was this billion-dollar lawsuit that actually revolved around only one problem. What does the word “flood” mean?

So, these are people’s lives, these are billions of dollars. There’s this humongous lawsuit, and the only thing that scholars, legal scholars, lawyers are discussing is the meaning of “flood”. It’s very interesting, because the judge that decided the case actually quoted from five different dictionaries and different sociological studies to actually fulfil the demands of textualism, which is: look, I’m reading “flood” like this because it’s how everyone reads “flood”.

And he was immensely criticised, papers coming from all places saying, “He doesn’t know what ‘flood’ means,” which is kind of like – I mean, if you say to anyone in the room, in the café at night, “Look, there is a flood,” no one goes, “What do you mean, a flood? What do you mean by flood?” I think people understand what “flood” means.

But in that specific case – and this is a very interesting legal problem that doesn’t commonly arise in natural language – the thing is: does “flood” cover human-caused flooding or not, or does it exclude that and cover just natural causes? So, when you’re talking about…

So, I think, in a way, I went to legal theory to help me solve the blah blah blah that I mentioned – and David also likes to use that very real expression, “blah blah blah” – which is, I mean, there has to be some way in which we lawyers, who have been talking about this for ages, can contribute to dispelling this kind of ambiguity.

In a way, I would say, very similar to what you do in the natural sciences. Of course, we all know that we probably can’t achieve the same degree of accuracy, but we can come close. And I think that’s the most we can do.

That would be my first example. The second topic, which I find interesting and which has a certain relation to legal practice and joins the two, is – this is more uncommon, at least in my experience – but it’s something that I like a lot, which is the matter of precedent.

Precedent, which in a continental system is not usually something that you are very concerned with, but in administrative law precedent actually works in a very funny way. Actually, David’s master’s thesis was on that, which is: if the public administration, within its margin of discretion, commits to a certain course under specific conditions, that’s a precedent, and it is protected under the principle of trust, or bona fides, whatever you…

And you basically can’t change it unless the conditions change, and you say it like this and it’s simple in practical legal terms. But one of the discussions that you end up having in court, or with your clients and the other parties, is: what does it mean for the conditions to have changed, and how does it affect the precedent that you had?

So, it’s kind of like this rule that no one was paying attention to, and now that the conditions have changed and the rule has changed, or the way to apply the rule has, no one really knows. And so, people say, “You can’t change that.” So, I think another area where legal theory has helped me is in trying to be very precise about the ways you interpret what “precedent” means and how you go about delimiting, in a way, what you can do when the conditions for exercising your discretion change.

That’s, I would say, another good example, at least from my practice, of where it’s helped me.

Bojan Spaić:
I also always think that… philosophy of law and legal theory are far more informative, or have the potential of being far more informative, than people give them credit for, especially analytic philosophers of law.

Analytic philosophers of law have this tendency of emphasising the futility of philosophy of law, but actually the very adoption of an analytic mindset – and you don’t find this analytic mindset often; you basically find it in philosophy. You rarely find it, I mean, you rarely train it in the sciences, and if you read, for example, sociological literature, they’re all over the place with concepts.

I mean, you could say, for example, to philosophers that they never solve their conceptual problems, but on the other hand they’re at least clear about not solving their conceptual problems. It’s just not usually so in sociology, especially in kind of… in the empirical sciences, but I think also in legal practice.

Lawyers often pride themselves on being rigorous, but if you apply analytical philosophical rigour to their reasoning and their analysis, it usually falls short, which is kind of wondrous. And I pretty much find myself agreeing with you that these influences are, or at least should be, more direct – not in a substantial sense, in that somebody can explain to you definitively what precedent is or what the role of precedent is in European continental systems, but definitely in that adopting a mindset of analyticity in this case can actually bring about good consequences in legal reasoning, legal interpretation, and legal practice.

But what I wanted to ask you about is: what is the relation the other way around? It’s interesting to me to ask you this question because you’ve basically been following, with your interests in law, the things that have been happening in technology for the last 10 or 15 years at least.

And I have this impression that, while it may be that legal practice at times doesn’t respond well to legal theory and philosophy of law, it’s also the other way around. It also seems to me that, in terms of the movement of technologies, philosophy of law is not doing a good job of following, or at least of being prompted with new questions and new issues that arise out of new technologies and their regulation, and has kept mostly the same problems as its dominant problems for quite some time now.

Domingos Farinho:
No, I agree with you. And let me explain why, but I just want to fall back on something that you said, to say something I think is probably controversial. I think one of the reasons why the influence of an analytical approach is not something you see more often has to do, at least in my experience, with the legal professions.

I always found – this is probably the controversial part – that there is one legal profession where an analytical approach would be, I would say, perfect, which is judges, adjudicating, because you are in a neutral – you’re supposed to be in a neutral – position, and I think the analytical approach is just that. It cuts like a knife through butter.

And basically, if you’re a lawyer, especially at the bar, I think there’s a tendency – of course, there are also other factors at play here, a certain legal culture in many countries – but if you’re a lawyer, you are used to having to defend everything and its opposite, because your client might want you to do that. And I’ve always said – I know that the natural lawyers listening to this, if any, will probably go against this – that it’s very good to be a natural lawyer if you are a practising lawyer, because, I mean, if you have to defend something, you just go, look, this position that I’m going for has this value behind it.

And then if you have to defend the opposite, you find another value and you say this value sustains this position. And of course, for me, being a moral relativist, there are values for everyone. If I want to defend whatever, there’s probably a value that supports my position, and if you can bring that to the law, that is very good for you, because on Monday you have to defend something and on Tuesday the opposite. So, I can understand that.

And I always felt that that’s the effect of having a lot of academics, for example, who are also legal practitioners, and it is something that combines very well. Why should I be analytical if, when I take off my academic suit and go out to be a lawyer, that wouldn’t suit me very well? I would have to juggle my brains.

So, that’s that. OK, so, going to your question, which I now forgot, but I was going to answer, which… yeah, yeah, yeah, no, yeah. The digital domain, that was the thing. No, I think there’s a problem. I can tell from my own experience. One of the reasons is that I have been feeling more in need of connecting legal theory and legal practice when dealing with digital law…

In my case, I didn’t feel that need, for example, when I was doing research on data protection. I thought that the usual frameworks of legal theory were pretty well suited to the kind of practical problems I found in it. It was, again, a problem of norm evaluation, interpretation, balancing – so, a lot of balancing in data protection – and there’s excellent legal theory on that, responding to this.

To this day, for example, in my university, someone from private law just published a thesis – you saw it – which is basically on balancing. And of course you can apply it. I’ve actually just written a paper on metadata, using and quoting some pieces, and I felt very at ease with that. It’s very easy to combine both dimensions.

Again, I’ve been doing a lot of work on the Digital Services Act, and there’s again a lot of balancing, a lot of fundamental rights. I would say, well, there’s nothing new here. Where I felt the need to actually go to legal theory, and felt that there are probably new ways, that I don’t know of, of dealing with these problems, is in the example I just gave you, which is artificial intelligence.

And, for example, this paper – I’m a little bit in awe of this paper, because I just found it like a week ago – and it’s one of those things that everyone has felt: when you find the paper, you go like, oh my God, this is the missing piece I was looking for. Someone had probably written this and I hadn’t found it yet, and now I did.

Because in my mind it made complete sense, when I was actually preparing my presentation, that probably someone had thought the same thing I did, which is: it would make sense to put algorithms to work on the problem of interpretation. So, how should we do that?

And then, of course, I found the paper on generative interpretation, and it’s pretty much – I don’t agree with everything, especially because a lot of it is in the Anglo-Saxon context – but a lot of it is pretty much what I would like to say.

And one thing I like in academia is that if you can quote someone who has developed an idea that you thought of, quote the person. They have done the job for you. There’s no problem, no envy. Just say that this is what you think, move on, and try to add something original if you can. If not, just present it, because probably people don’t know it, and that alone is something that you’re helping with, with your work.

That is, I think that I… well, I agree with you. I think artificial intelligence demands so much of us – I mean, for me as a lawyer, my main problem these days is what the threshold is for the amount of technical, non-legal information and education, in the sense that I must…

I’ve been reading about artificial intelligence. I’ve been talking to my friends in engineering, trying to learn about the technical side, because I know that much of what I want to do in regulation of AI, for example, is going to be completely devoid of interest and even of accuracy if I don’t understand the subject, the topic.

And usually, we in legal theory are very at ease with that because, well, we’re dealing with human beings. I mean, everyone knows human beings. So, we are dealing with deontic statuses, we’re dealing with actions. However different people are, they still have to perform actions, so they have to express themselves. They can kill each other; they can promise things to each other.

And algorithms, I mean, of course they can do all this, but they don’t do it in the same way that we do. So, it starts at the very beginning. Are they doing it? I just came from Seville. One of the talks was on that. Are we talking about autonomy in the same sense that we talk about human autonomy when we talk about the autonomy of algorithms?

Even if we say that an algorithm has achieved autonomy, is it the same autonomy, the same self-giving ability of giving rules to itself? Is it the same? So, I think that is where legal theory can do a lot of work, and that has profound consequences – it has the ability, or the promise, to have profound consequences on legal practice, for sure.

Julieta Rabanos:
Well, this last point is fantastic because we had, in the student conference within this Belgrade Legal Theory Week, a keynote speaker also saying that, for example, autonomy is what differentiates between artificial intelligence and other, for example, computational algorithms. But then he couldn’t really give a definition of autonomy. And he said something like: well, autonomy, or the threshold of autonomy that differentiates artificial intelligence from all these other computational algorithms, is something that we are going to see in practice, and maybe in five years we are going to have a definition. What do you think about this?

Domingos Farinho:
No, I completely agree, in the sense that I see that as one of the most interesting problems, if you take on the problem with no closed ideas. What I think – I mean, if you asked me today what my sense of autonomy is: it’s a very basic, simple concept. I think everyone can understand it. We can discuss the specifics, but the idea comes from the Greeks: you have the ability to give yourself a rule.

So, of course you can decide; you are autonomous in that sense. So, basically, if I am sitting here talking to you, and if all of a sudden I decide I really want ice cream, I’m going to look for the closest ice cream shop. I could just give that as a rule to myself: now the only thing that actually controls my life is to find the nearest ice cream. And I can do that.

And of course, there are a lot of rules that would probably go against that, like social rules – that you should not leave a podcast in the middle of the recording – even normative rules, that for example, if I worked at the university pub, I would be skipping my work hours. But the thing is, I can do it. I have an ability which is very well studied by a lot of science, but with not very clear conclusions. I have the ability to come up with my own decisions, even if I am influenced, of course.

And what I’m asking, and trying to read up on, is: when can we say that of an algorithm? Or what is the threshold – and that’s actually the technical term that computer scientists are using – at which you have fed so much information to the machine that the machine is actually using the information to come up with something that you did not programme it to do?

And then, in my way to answer your question, or to comment on what you said, I would call it autonomy if – but my problem is I don’t have enough technical knowledge to feel comfortable with that – so, if anyone that I would trust technologically, say the equivalent for algorithms of Einstein or Turing – well, Turing in an updated version – came to me and said: “Look, I can now tell you that the information that we fed to this machine does not explain the decision that came out.”

So, it’s not just about predicting behaviour. It actually came up with a new behaviour. It invented a rule that is not explainable just by the input information. If I were to trust that this is real, I would say machines now have autonomy. And if they have autonomy, then of course you have to deal with that concept of autonomy, and you have to solve problems with it the same way law has been doing for human beings.

But let me just tell you that I don’t think that’s the most important aspect. I think autonomy – of course I gave it as an example, of course I think it’s important – I think it’s going to take a long time for algorithms to matter to law, other than liability.

Because I think at this point the other dimension lacking in algorithms – the one that will, I think, completely change the way that we, in legal theory and legal practice, approach the subject – is… and I think this might be weird, but bear with me: there’s no body and, with it, no space. There was no body there. It’s flat.

And you might think, well, but what’s the point of that in law? Well, law, even when we’re not noticing, is implicitly always relying on bodies. And I know, like, for legal theorists this is sometimes weird because we are used to thinking about non-existent entities and abstract entities. But law is very much determined by the body, by the existence of bodies.

So, human beings – even if they are capable, of course, of rational thinking – are encapsulated in the body, and the body is a strong influence on the way that you do everything, thinking included. And you still don’t have algorithms – let’s imagine the autonomous algorithm that we were talking about – you still don’t have that inside something like a body.

So, something that… the only problem that machines now pose to us is through speech or graphical representations. Our machines can’t punch you. I mean, some robots can. But I’m talking about what you can find every day: you can go and get your computer and talk to ChatGPT. But usually, unless you live near Boston Dynamics, you can’t go out and see a robot walking around.

And that will, I think, completely change the problem that we are now in, because we are dealing with how algorithms may provoke some kind of damage, but let’s be real: the kind of damage that algorithms can provoke is still very low-key compared to algorithms that have some sort of ability similar to that of humans, with a body.

So, they can pilot tanks and planes, or they can hold a gun, or they can do stuff that – of course you can reply and say, well, but many of those things you can now do virtually, so you can drive a car without driving the car, or you can… So, yes, it’s true. But still, there’s a dimension of harm that algorithms could impose on human beings that is still outlandish to us.

And I mean, of course, we’ve been dealing with that in science fiction for a long time. But I think – let me finish with this thought; again, I’m speaking too much – I’m a cinephile, and one of my favourite movies, as for most people that like movies, is Blade Runner, and Blade Runner has this test to identify replicants, which is a Turing test in a sense.

And the funny thing about Blade Runner is that it actually made the problem I was describing real, which is: it’s not only that you can’t tell it’s a machine, according to the Turing test – where you have to veil them and you don’t know if you’re talking to a machine. No, no, you’re seeing the machine. You just don’t know if it’s a machine-machine, and not just because of the way that it’s talking to you, but the way it’s behaving.

And that is very well explored in Blade Runner because, if you recall the movie, it was not only about the language skills of the machine. It’s the way that it emotionally replied to the stimuli of the language, which is very human-like. So I think we can get a machine speaking like a human, but if you had to give it bodily expression, it would probably be very difficult – but I’m not a specialist on the subject.

It would be very difficult, for example, to give it the raised eyebrow, or the sigh, or the sudden laugh, or something silly that you hear, because you would have to compute at a level that I think we’ll achieve – I am an optimist regarding technology and the capability of humans in that domain – but we are not there yet. So, I’m not very worried.

That’s why I understood what I heard yesterday, I think. Although I like the idea of what Europe is doing in regulating AI – I’m, in that sense, a very European Union person – and I think that there’s a certain need to regulate, I just don’t think that we are there yet in the timing. I think we could wait a little bit more.

We’re not there yet. I mean, the high risk that you read in the proposal is just very abstract. It’s not really there. I mean, probably, like I said, it’s in generative AI, but not in the AI that’s in the proposal.

Bojan Spaić:
So, you, like me, basically welcome our robot overlords.

Domingos Farinho:
Yeah. And at least more than welcome, I would say, let’s see if they come at us and they say “we come in peace” or if they come at us with guns. Let’s wait and see. I mean, let’s not immediately assume that it’s going to be bad. Let’s not assume the Terminator scenario. Let’s assume an optimistic scenario.

Bojan Spaić:
What has been fascinating in this – of course I share your optimism in this regard. And apart from certain scenarios of developing artificial general intelligence and it not being aligned with human intelligence, and the possibility of artificial intelligence, if it’s general, annihilating us all in a 20-year period – which seems kind of outlandish…

But we are also of similar generations, so we grew up watching similar movies. And I’m a huge science-fiction buff, so if there is something science-fiction-y about a movie, I’m like 700% more likely to watch it. If they made romantic comedies with just robots, I would watch romantic comedies with robots. I think I would do that, yeah. Although Her was a bit of a difficult experience to swallow for me. I remember… I didn’t like it. It didn’t work for me. Spike Jonze was the – yeah, Spike Jonze was the…

Julieta Rabanos:
I think I didn’t watch it.

Bojan Spaić:
You have to watch it. Yeah, definitely.

Domingos Farinho:
Yeah, let alone just to say bad things about it.

Bojan Spaić:
Yeah, but people loved it somehow, yeah.

Domingos Farinho:
You know, it was – yeah, it still is very quoted, even in examples in articles, saying: “You have to see Her,” or…

Bojan Spaić:
What I think kind of surprised us early this year was something that runs a bit counter to your – and I’ll relate to the first part of your answer – a bit counter to this intuition that bodies will be very important for us.

What I think changed since the first Blade Runner and the imagination of Philip K. Dick is that we were always thinking – for example in the 80s and even in the 90s, we had this idea – that we would encounter artificial intelligence in a bodily form. It doesn’t have to be much – it can be HAL. Still, it’s important for Kubrick to have that little… the physical eye. Yeah, there’s an eye at least, so then there is the person, then there is – or at least a form, a human form.

But what I think happened in the 2000s, especially in the late 2000s – and it relates to your research interests – is that we kind of disembodied ourselves, a bit more than a bit, in the sense that we have all these friendships that are completely textual friendships, or all this communication and these relations that are textual communications. They’re not even voice communications anymore.

I know that Italians insist on sending you voice messages on WhatsApp, but nobody really does that. I’m sorry, Italians…

Domingos Farinho:
Under-20s in Portugal – it’s one of their favourite forms of communication, yes.

Bojan Spaić:
The voice-over and –

Julieta Rabanos:
There has been this curve: the telephone, and then the possibility of chat, and then the possibility of messages, and now it’s going back to some audio thing – not immediate, in the sense of calling and having this, yes, exactly – but it’s going back.

Domingos Farinho:
No – I remember, like a year ago, I was exchanging messages with a niece of mine, and she kept sending me audios, and I kept replying in text. And after the third exchange she went: “You’re so boomer.”

Bojan Spaić:
Look, if it’s any comfort, I do that with Giovanni too. He sends me voice messages, and I just reply in text. And he should be older than I am – at least some ten years.

Domingos Farinho:
I’m not even a fundamentalist, because, for example, if I’m in a situation where I can’t write, of course I’ll send an audio message. I’m just about the purpose, the goal. But there’s something about – I mean, I don’t know, it’s just… whatever works.

But let me reply to what you were saying – did you finish?

Bojan Spaić:
Yeah, just an additional point. So, while we were kind of – we were always thinking, and I think that’s kind of a bias – we were always thinking, as lawyers, or many lawyers were, that an embodied artificial intelligence would be the thing that would disrupt something.

And we were never thinking that there would be something that would just appear and disrupt law. Because law is one of these things – similarly to graphic designers, who are basically, like, learning to knit or whatever they’re doing now, because there is artificial intelligence doing a fairly good job of replacing graphic designers.

And we kind of weren’t expecting what happened. Maybe somebody following closely the development of different versions of generative artificial intelligence had this idea: “Ooh, I want to replace lawyers,” which is a noble idea for most of the world, yes. If there’s something that can replace lawyers, yeah –

Julieta Rabanos:
So, good beginning.

Bojan Spaić:
But this – this thing, that when it happened, and when you inputted or prompted it for the first legal thing, and when it gave a sensible answer, it was kind of this… it was a revelation.

There is something that – kind of like the legal Turing test is already… we’re beyond that now because of some features of, I think, legal reasoning or formality in law that can be reproduced by artificial – or at least current-generation artificial intelligence.

That we do not have here, in this sense. And I finally come to my question after half an hour of posing it, yeah.

Julieta Rabanos:
You have to remember that a question is a sentence ending with a –

Bojan Spaić:
Yeah.

Julieta Rabanos:
Question mark.

Bojan Spaić:
Yes, as somebody who… yeah. And basically the question is whether we really, in this kind of situation in which we mostly deal with data and with these secret forms of communication, whether we really need embodied artificial intelligence in order for that artificial intelligence to be relevant for law, to act, to eventually replace lawyers now with simple tasks and further down the line with more complicated tasks.

Or is it enough that we have these language machines that actually are very good – better than most humans – at producing human-like texts?

Domingos Farinho:
OK. So, that’s such a rich question that is moving…

Bojan Spaić:
That’s a nice way of saying that I talked too much.

Domingos Farinho:
No, it allows for a lot of perspectives. Let me go back to fix what I think was very poor communication on my side. So, when I talked about the body, this is what I wanted to say, and I actually think that the period we started living in from the middle of the 90s up until now probably supports my claim.

We are still, as a civilisation, trying to deal with the fact that many very important domains of our lives have moved online, and we have not yet learned to deal with that at the most important level of all, which is ethics. So, we are doing that technologically, we are even doing that better or worse sociologically, but if you go to the moral dimension of it, we are not yet there.

And one of the first things I would like to leave out there for consideration is this: it’s very interesting to see that now you have more discussion, more thought, more writing on the ethics of AI and how machines should behave than on how we as humans should behave in society, which is kind of… I mean, it’s very Sartre, or some other guy would say: so, you are discussing how machines should behave, but you, the guys feeding them the datasets…

Whereas you can’t do that to people, because then you’re telling them how to live their lives. Exactly – that’s the point of ethics. So, that’s a problem. And my point is, I think one of the troubles of doing this research – and this is not an excuse to say “I don’t know”; I have no problem saying I don’t know – is that I think of myself as the guy trying to do law about the Industrial Revolution during the Industrial Revolution.

I mean, what’s going to be the consequence of the steam engine? Oh man, someone 100 years from now is going to do a much better job of trying to theorise that for law than we did. We’re doing that right now, and I think that’s the problem that I don’t know how to solve. We have to do our best.

So, OK, that’s one half. I think you’re absolutely right, and this is a bias of mine. And in a way, what I find interesting in legal theory – and this is even more controversial than what I said before that I thought was controversial – is this: these days, one thing is what I do to earn a living as a practitioner, and that we call law, but I don’t really find that to be law. I never did, actually.

It’s nothing to do with machines. It’s just that, if you tell me, let’s try to do this using this language – the Americans do that a lot – there’s “paralegal” and there’s “legal”. And if you tell me that algorithms are going to take care of everything that’s paralegal, I’m going to say yes, but probably I’m calling “paralegal” something that you think junior lawyers do, even senior lawyers probably.

Because for me – and I think that’s what I like about the analytic school – if you look at the analytics, of course they discuss a lot of things, but it’s not a coincidence that at the end of the day we always end up discussing balancing. And if I want to be very provocative, I’d say it’s the only interesting part of law.

Which is, I mean, at the end of the day – and this is weird in a way – that’s where the law ends, if you go to balancing. I always tell my students: look, this is fundamental rights. Congratulations, you have achieved the end of law. So, at the end of the semester I’m saying, look, this is not law – unless of course you’re a natural lawyer, which I’m not, so I also make that disclaimer.

So, the only thing that we as lawyers can do is just make that very clear, profoundly, rigorously clear, which is to say: this is the way to balance. You can try to improve it – that’s what Sampaio tried to do, for example, in his thesis – “we can add this criterion”.

But I keep joking with Sampaio: “Sampaio, the thing that I saw happening to you over the last five years, during your PhD, is that you now seem to be a moral philosopher, not so much a legal philosopher.” And actually, I did the opposite. I took the opposite path. I remember being in my 20s at university and reading everything I could on ethics. I discovered Scanlon, and I discovered a lot of those philosophies first.

And the feeling I got was: this is it. I mean, these are the guys debating virtue ethics against consequentialism and deontology. This is the discussion – unless you want to go natural lawyer and say this is our discussion as well, which I never felt like. What you can do is say: look, here’s what you do with law; now you balance if you have a discretion problem, but don’t call it law. Just send it to the democratic institutions, or to God, or to wherever you want.

And I don’t think – I think sometimes people listening to this kind of approach say, “Oh, this is a way to just get the burden off you. OK, this is not my problem.” No. I think we as human beings have to know how to compartmentalise our discussion. So, there is a room for law discussions, there’s a room for ethics discussions, there’s room for political discussions.

So, going back to your question and to reply: I think that algorithms, in a way, will make – and I’ll end on this note – I think algorithms are probably the analytical lawyer’s best friend because they will kill all the cream, all the foam and…

Julieta Rabanos:
Just all the blah blah blah.

Domingos Farinho:
All the blah blah blah, exactly. And they will leave the bone bare. They will show you: look, if a machine can do this, what the hell is your added value? And your added value is being very clear on how you go to the things that the machines probably won’t do as well.

That’s what – for example, why did I choose algorithms and interpretation for my talk? Because if anyone can show me – and that’s what I’m trying to understand – that machines can do a better job of interpreting a legal text than I can, by all means let me do some other more important stuff. I have no intention of fighting the machine.

And the thing is, it sure looks like it. I mean, if you read David’s articles saying, look, we compared three theories of interpretation and textualism is the right one, well, the following question is: and what’s the best way to get a good answer out of textualism? Well, maybe an algorithm. OK, I’ll go with it. So, to answer your question, I don’t see a problem there.

When I mentioned the body, just to wrap up, what I meant is this: the kind of problems that 2,000 or 3,000 years have made law into is that, if you’re talking about actions – of course you can influence actions without having a body, but many of the problems that you do have to deal with in law are exactly that: they’re actions. They imply some sort of movement, some sort of control over a material body, either because you have a body or because, by your actions, you are having an effect on another body.

And to reply to a previous question of Bojan’s, this is where I find, for example, that the idea of personhood, the idea of natural persons versus legal persons, is being completely put at risk by the idea of algorithms. Because if you’re going to discuss algorithm liability, you’re going to… but should they be treated as persons, even without a body? Or should we fictionalise some sort of personhood in order to be able to have liability?

And you go: well, OK, but where is the ability, other than through text of course, to produce some change in the world to which you would have to apply a deontic operator? And that’s the sense in which I think the body will be important in posing new questions.

I mean, when you have robots on the street doing stuff – of course you can reply and say, but they will do the exact same things that they can now do through a cell phone or a computer. That’s true, but it will pose problems that we cannot imagine now. I mean, we can try to make a prognosis, of course, but I think the ability to have autonomy – the physical, corporeal autonomy that comes with, or follows, the other autonomy – will pose new problems.

Bojan Spaić:
Have you ever seen those videos of Boston Dynamics robots, where…

Domingos Farinho:
Yes, I am fascinated by that.

Bojan Spaić:
Where they – somebody, in order to prove that they are stable, so the robot is walking and somebody kicks it, or one of the programmers at Boston Dynamics kicks the robot with a boot…

Domingos Farinho:
They go against them.

Bojan Spaić:
Did you have a kind of intuitive ethical problem already with a very robot-y robot being kicked by a boot, just in virtue of the fact that, if the engineer did that to a fridge, you wouldn’t have this problem? For me there was something about the lifelike movements of the thing.

Domingos Farinho:
I’m obsessed with the updates of Boston Dynamics’ website. Whenever there’s a new video I go to see it, just because I do. And just to follow on that, this is of course very personal. I don’t even have any claim to connect what I’m going to say with law, except at the very end of what I’m saying, which is, for me, if you go to human dignity, for instance –

I mean, our idea – my idea – of humanity is pretty much centred on feelings, not rationality. Though I think 99% of people, if you asked them what the particular disposition or condition of humans is, would say rationality. Of course, more religious people would probably say a soul or something like that. For me it’s…

Of course, not just feelings in the sense that, oh my God, I’m heartbroken, but the idea to have an affective environment around human beings. And the thing is, if you can reproduce that affective environment, there’s no – for me, there’s no trouble, there’s nothing in legal theory, nothing in the law that prevents you from extending the law’s scope to that domain and to offer the exact same protections and the exact same provisions as you find in…

And this is nothing new. I mean, there are a lot of people in several countries going for “we should treat animals differently, we should give personhood to animals”. So, this is not a problem only with machines. I think what’s lacking is that affective side. I think that’s the reason why people want to give personhood to animals, in a sense – and the discussion in that field is very interesting, because you have those on one side saying: yeah, we should give it, but only to those animals that are like pets – of course to the dog, to the cat, to the ferret, but let’s leave it at that; and others saying: no, no, no, all animals deserve our compassion and our love, so why not elephants and little lizards, and even, of course, mosquitoes.

And that interests me as someone interested in legal theory because I think we can see what’s coming ahead in the discussion with machines once they get a body. Because feelings have a lot to do with bodies. It’s much easier to love something that is corporeal and that has some sort of movement and there’s some sort of temperature than it is to love your cell phone or to love a computer.

Actually, I completely agree with Bojan’s example of HAL. I think the idea of the red eye is something that was probably Kubrick saying, can we have some sort of intimate relationship with something that at the very end is just a very warm voice and a warm look? And probably, I think, I don’t know.

But my point is: if I – or if the dominant view in humanity – held that humanity has to do with feelings, then you can extend humanity beyond what we scientifically call Homo sapiens. You could extend that to some sorts of animals, for example pets; you could extend it to machines. I don’t think it’s a problem for law.

I think it’s going to pose problems to law, and the main concern will be: are we trying to solve those problems with the same categories and frameworks that we have, or trying to come up with new ones? And I don’t know which is the best approach. Because, like I’m saying, for example, if you do develop some love for machines – again, this is Blade Runner and Sean Young – if you love the replicants and you can’t tell the difference between the replicant and the human, aren’t you loving either way?

Does it matter… does the love object matter? It’s very well embarked – I’m sorry – but does it matter that you’re loving the machine if you don’t know it’s a machine? For law, that’s the main issue. You have to have some sort of deontic treatment of that action.

And the fact is that you look at law and it’s a decision by the normative authority. If you ask the normative authority, “why are you giving rights to people and not to pets?”, it’s because people matter to it more than pets – until the day that they don’t. And the same thing will happen with machines. The day that machines matter as much to us as humans do, the normative authority will act on it.

So, I think that’s it.

Julieta Rabanos:
This makes me remember this episode of Star Trek in which they are trying to decide whether Data can count as somewhat human, or at least have legal personhood, even if he is an android. And there is this trial trying to demonstrate whether he is or is not.

Domingos Farinho:
Well, yes.

Julieta Rabanos:
And I think that the conclusion was to give him legal personhood because he was a sentient organism. Even if he was an artificial organism, he was “someone”.

But this comes back to your idea of the body, because in this case it was not a matter of trying to give legal personhood to just binary language; it was embodied in a very solid way. But I was thinking – because when you mentioned bodies, and this goes to my research topic, which is authority itself – I think that maybe the idea of law having to do with bodies and action comes with this idea that law is not able, or should not be able, to intrude into the internal, spiritual or mental; that it has to deal only with the external manifestations of things.

But in this sense, I think that what happens with artificial intelligence is that, if you don’t have a body, this difference between belief, for example, and action… the threshold, or the line, goes away.

So, in this sense, you were talking about movement, for example, and then law having to do with movement, but you also mentioned choice. And maybe we, as humans for example, make choices that do not manifest in movements, and in the majority of cases the law has nothing to do with that. But if we go to artificial intelligence, for example, and we think that at some point we can say that artificial intelligence makes some choices, and we don’t know what type of choices these are…

What do you think could happen in that sense? Is law going to wait for artificial intelligence to have bodies in the sense that we were discussing, or do you think that maybe the importance for law will arrive when this question about choice is settled?

Domingos Farinho:
What I think – when I brought bodies into the discussion – what I was trying to say, if I can make this clearer, is: I think bodies are where we will draw the line between two different discussions in legal theory. Because this was something that Bojan asked me, and I thought about it.

So, basically, when I look at this from my perspective, what I think is: the tools that we have now in law, especially in legal theory, for what we can see now and for the foreseeable future – I’m talking about the next 10 years, which is already a lot of time – I think they’re pretty good.

I think, if you ask me – and this goes against many people working in digital law – I think the only thing we have to deal with, and I think the European Union understood that it should have focused on that first and not the AI Act, is liability. Because AI is a problem of liability. It’s a thing, it’s a product. I think the European Union framed it very well.

So, it’s a product. I mean, it will probably, by being a product and the way it’s going to be used, cause harm. So, of course people will have to deal with the harm provoked by that tool. So, it’s a tool. And I think what people are actually, pretty much in the line of science fiction, worrying about is this: well, what if it stops being a tool and it starts being the user of the tool? And we go back to the discussion of autonomy.

And I don’t think… I mean, I think that even if that happens – let’s say that, for example, ChatGPT passes the threshold tomorrow and there’s a bunch of scientists that say, “Look, the answer that ChatGPT just gave you was not foreseeable, it’s not controllable through the datasets and the algorithms” – would that change anything in the way that we are dealing with it regarding legal theory? Would that change anything?

I don’t think so, because we already have a set of tools that deal with that problem. So, I don’t think that anything would change. We would just say, of course, that if it’s now achieved autonomy, then let’s deal with autonomy the same way we deal with human beings. And again, it would be a liability problem.

Why did I introduce the body to the discussion? Now imagine that you have ChatGPT inside a body and it achieves the threshold, and you now have an autonomous ChatGPT that can walk around. Why is it not just a liability problem? Because, for example, it will want to paint a painting, or sing at the opera, or…

That is where it goes to the problem of fundamental rights, and that’s where I think Bojan is right: it will probably pose a lot of problems for legal theory. So, going back to your question: the idea of choice matters in the sense that that choice is embedded in what we now call fundamental rights theory – other than that, I think it’s a problem of liability, even if it’s already an autonomous being.

Would I – I mean, would we in the room – say that an autonomous ChatGPT without a body is the holder of fundamental rights? It’s very hard. That’s why I introduced the idea of affection. It’s very hard for me to see any normative authority – and this is where we come in – that would say, “I’m willing to do that.”

I don’t think legal theory would be stimulated by the normative authority to come up with any problems or even try to guess the problems that the normative authority will come up with. I don’t see it. Maybe, maybe so.

But if you give it a body, if you allow it to make choices, if those choices imply the exercise of fundamental rights that connect with human dignity, if those choices start to affect people in a way that goes beyond the mere rational decisions that I have to make – “Should I buy this? Should I do that?” – which is what you use ChatGPT for, “How should I write this memo? Let’s ask ChatGPT”.

If I go beyond that, then I go – and I think it’s a very parallel example – then we go near the discussion that some scholars are having with animal rights. And I think we should look at animal rights to understand what we should do in legal theory.

Let me just finish with this. I think Star Trek is probably the best science-fiction series for legal theory ever, because the levels that go from Captain Kirk to Spock, the Vulcan, and then to Data are exactly what we’re experiencing in AI. So, you have the typical human, full of emotions and affection, in the case of Captain Kirk; you then have a sentient being that is organic – a Vulcan, but one with some sort of problem with affection, and this is very much along the lines of what I was saying – and you have a non-organic being that has achieved sentience.

And I didn’t go to sentience on purpose, because I know the predominant current in legal theory working on that domain probably says that sentience is the criterion for personhood, for rights. I know. I think it’s not there yet. I wouldn’t say that sentience is the key; I would say that it’s something more – unless you say that sentience is just the capacity to convey and to express emotions, in which case I would agree. I mean, it’s something other than that.

The idea of pain is good. I mean, if a scientist comes to me and says, “We have now made robots feel the equivalent of pain; it’s sentient in that sense” – OK. But imagine it’s a very ugly robot, something that I would… I have no relation to it. Would the normative authority ever think of… I mean, even legal theorists – forget the normative authority – would legal theorists use any of the tools they have, an analogy, to say, “Oh, we should give rights to this sentient being that looks like a cell phone”?

So, imagine I say, like, my cell phone is now sentient.

Julieta Rabanos:
I mean, poor, poor cell phone.

Domingos Farinho:
But of course, I think there we are very out there, especially for an analytical approach. This is science fiction.

Julieta Rabanos:
No, no, I didn’t want to go down the slippery slope of saying: if we are starting to give this recognition to some things, where are we going to end up? I was thinking more about – and I completely agree with you that our current framework in legal theory is going to be able to explain a lot of things, just with some decisions about what to put into certain categories, for example legal persons.

But I was thinking more about, for example, the liability thing, because when I think about liability, I think about it in two ways. One: liability because of something – for example, your ownership of an object, and the object causes some damage. And two: the liability that you have for your own choices.

And naturally, when you talked about liability, I was hearing more the first one, like some user being liable for the usage of a tool. But then, when I was introducing the question about choice in artificial intelligence, it’s because I think that choice would be the threshold to pass from liability like a user being…

Domingos Farinho:
That is where I think I disagree with you. Let me give you two examples, both with ChatGPT, to see if I’m understanding you right. Imagine that today we go to ChatGPT and ask the example I gave yesterday: “What is the dosage for this medicine that I have to take?”

For whatever reason, I think ChatGPT should be trusted on that, and ChatGPT makes a mistake and tells me the wrong dosage, and I get very sick, and I want to be able to get compensation. This is a clear-cut case of liability.

Imagine, on the other hand, that I start a discussion with ChatGPT, and at a certain point ChatGPT is just insulting me, just saying, “You’re completely a moron. I mean, I can see that you are the son of a bunch of Jews that deserved to die in the Holocaust. I mean, you shouldn’t even be talking to me. Here, go outside and enjoy the sun. I wish I had a body like you. You’re trash.”

So, something that clearly – if you go to the European Court of Human Rights, this is of course a violation of freedom of expression. This is a case of insult, so you can apply the Criminal Code. There’s no way to protect that.

I think the discussion in this case is: who am I going to blame for this? I mean, in the first case of the medicine, I think there’s no doubt it was the dataset that’s wrong. But in the second case, I think we feel that it’s a different thing. I don’t know if you agree with me.

It’s like: is it the dataset that led the machine to insult me, or maybe it has achieved the threshold, and it is now making choices that no one could have predicted? And is it the same with the pharmacy dose? Maybe. And this is where my legal, my technical knowledge is kind of at the very end, maybe.

But I get the feeling that it is not, in a sense. I mean, because – and this has to do with one of the very ancient differences between doxa and episteme – the idea that one thing is knowledge, and if you put the wrong dosages in the dataset, of course the machine is going to give you wrong information.

But it’s very different to know what an insult is, in a certain sense. Although, of course, criminal codes and penal codes try to define that, if you read the decisions of the European Court of Justice, of the European Court of Human Rights, you see that the idea of an insult varies with context, with geography, between minorities and majorities.

Can you put that in the machine? I think one day you will be able to; the machine will pass a certain threshold at which it will have the ability to judge by itself whether it’s talking to someone who is black or white, Jewish or Catholic, and that will of course be able to be factored into the machine.

But right now, I think we can all say that if the machine insults me, it’s not the machine’s fault, at least with the knowledge that we have now. I think that. But I don’t know if you agree with me.

Julieta Rabanos:
No, I agree with you. I was just saying that I think that maybe the choice point, in the sense of it being the passage between object and subject… yeah, yeah, yeah, that was just my point. And I thought that maybe this point of choice is not necessarily linked with the idea that we were discussing.

Domingos Farinho:
I know, I agree with that. You know, I didn’t say that, but of course.

Julieta Rabanos:
About… maybe autonomy, depending on the definition. In the sense of, well, naturally, it depends on the idea of autonomy. But then, naturally, if you are able to make a choice in the sense of, we ordinarily think about the choice, this clearly means something relating to…

Domingos Farinho:
…are almost synonyms. OK. Again, let me try to ask for clarification. If you’re saying a choice in the sense that I have given you enough information for you to choose –

Julieta Rabanos:
Autonomy.

Domingos Farinho:
I don’t think that that’s something that will lead us to have to deal with that in a different way than we already do with humans and with the frameworks we have in legal theory.

Because that doesn’t really tell us much about if we should find a way to make the programmers or the interactors of the machine, or the algorithm, liable, or we should come up with something new.

Because, I mean, if you give enough information to the machine and say: look, if this is verified, you should go this way, and if that is verified, you should go that way – I think this would probably be what you are calling choice.

I mean, usually that’s actually what we call choice. We would all like that. For example, if I tell my kid: look, if it’s raining, take the umbrella; if it’s not raining, you don’t need to take the umbrella.

And you would say, can I put this into a machine? Is this a choice? Yes. The machine, if it has the ability to understand if it’s raining, will be programmed to choose between taking an umbrella or not. And it will choose the umbrella if it sees it’s raining, and it will not choose the umbrella if it sees it’s not raining. And we will say, this is amazing, the machine is doing something I can’t make my 10-year-old kid do.

Of course, the machine is an improvement. All right, yes, achieved choice. But is it the kind of choice that we are talking about? Because if my kid doesn’t take the umbrella, it’s because he’s autonomous, he knows that there’s the law of the father that will probably take the PlayStation away, but he decides to do it anyway. And the machine does not have that option. The machine is conditioned by “is it raining or not?”.

So, my kid looks at the rain and says, I know that it’s raining. I know that the rule my father gave me is “if it’s raining, take your umbrella”. But I make my own rules, I’m autonomous, I will not take the umbrella.

And, if you tell me… this is why this is, for me, the problem – that’s why I’m reading engineering books more, probably, than I’m reading moral books – which is: if you tell me, “but now the machine, with information I gave it, can do that”, I can tell it: look, take the umbrella if it’s raining; don’t take the umbrella if it’s not raining; or, if there’s any reason why, although it’s raining, you think it’s worth not taking the umbrella, don’t take it.

From what people tell me, I can already make the machine do that if I give it, for example, a neural network. I can give it an archive of reasons. OK, so, it’s raining, but you are already carrying a lot of stuff; if you are carrying a lot of stuff, although it’s raining, you may choose: look at how much it is raining and look at exactly the weight you’re carrying, and make a choice. OK – a neural network, from what I gather, can do this.
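The two kinds of machine “choice” being described here can be sketched in a few lines of Python. This is a minimal illustration only: the rule, the weights, and the thresholds below are invented assumptions, not anything from the conversation or from any real system.

```python
# A hard-coded rule: the machine "chooses" only what the rule already determines.
def rule_based_choice(raining: bool) -> bool:
    # Take the umbrella exactly when it is raining.
    return raining

# A weighted trade-off, closer to what the speaker attributes to a neural network:
# rain intensity counts in favour of the umbrella, carried weight counts against it.
def weighted_choice(rain_intensity: float, load_kg: float) -> bool:
    # Illustrative weights; the output is still fully conditioned by inputs and weights.
    score = 2.0 * rain_intensity - 0.3 * load_kg
    return score > 0.0

print(rule_based_choice(True))      # True: the rule leaves no alternative
print(weighted_choice(0.9, 12.0))   # False: heavy load outweighs the rain here
```

Either way, as the speaker notes, neither function can step outside its inputs and invent a new rule for itself – which is exactly the gap between this and the kid’s choice.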

Again, the choice that I’m saying my kid made would probably not be comparable to this, because again, it may be just at the last minute that he decides, “oh my God, it’s this awful umbrella that my father gave me, I don’t like it, I’m not going to take it”, and he made a choice out of a rule that he just invented.

The machine can’t do that, I think – and this is my only problem right now. If the machine can do this, then I would agree with you that choice is the threshold. I just want to know that we are talking about the same thing when we talk about the choice that is predicated on the autonomy of humans and the choice that is predicated on what could be called the autonomy of machines.

Julieta Rabanos:
I think I would agree with you in a wide sense. I specialised in criminal law in my bachelor’s, and some of these things about artificial intelligence always make me go back to this discussion about determinism – yes, exactly – and its basis in criminal law: that we basically have criminal law, at least in this Western version of criminal law, this liberal version of criminal law, assuming that there is such a thing as free will.

Because if we assume that we are determined in this… like, for example, artificial intelligences are determined by the datasets that we input, blah blah blah, then we don’t have choices, at least in this. This was the discussion, and I think that in the United States, in criminal law, this determinism thread – thread, not threat, thread – was very, very important.

And then we were saying, yeah, we should not put people in gaol but just medicate people, because they couldn’t do otherwise, and the idea of “doing otherwise” is the basis of this idea of choice. So, everything, when it comes to artificial intelligence and these discussions about when we can talk about choice, when we can talk about liability or when we can talk about blame, it makes me go back…

That’s an intensely, intensely interesting…

Bojan Spaić:
OK, heavily accented philosophy of law, exactly.

Domingos Farinho:
You know the title of your podcast. I think it’s very clear now.

Julieta Rabanos:
Yeah, I know, I think there’s no… but basically, all this discussion that has been had, or in some sense is still being had, with human beings, is being replicated with these things.

Domingos Farinho:
Let me do a bit of publicity. In my research group on digital law, we do this monthly event called the WOW – the Workshop on Ongoing Work – which I’m willing to invite everyone from the Belgrade Legal Theory Group to join, if they’re doing legal theory in the field of digital rights. Because basically that’s what we do in that specific research group.

And the last person that we had is this PhD student from the Catholic University in Lisbon, whose background is actually criminal law. And we were discussing the fact that many of the most recent interesting discussions – especially after the EU unveiled these proposals for the two directives on AI liability – are coming from criminal law scholars, because of the guilt issue. That was his topic in the talk he gave last week.

And he said that if we look at liability in private law, we’re OK with it because there is objective liability and subjective liability, and we can move from one to the other. So, we would look at AI saying, OK, AI falls under objective liability; it’s a risk, which is what the European Union is currently doing.

But then come the criminal lawyers and they say: not unless you tell me that it has some sort of will. Because if it has will – and will connects with autonomy – then there’s guilt. And if there’s guilt, you cannot go with objective liability.

For me, that made perfect sense. Of course I had never thought: “Let me look at the criminal law scholars,” but I should have started there. And now I’m willing to read a lot of it. He gave me a lot of suggestions. But that makes absolute sense. And from that point – again going to the question that Bojan asked me earlier, which will keep haunting me after I’m gone – when do you think that we are going to have some innovation in legal theory provoked by the digital domain?

I think probably there. That’s why I brought the body. And the body here has a lot to do with will. Because one of the problems of will, at least the ones that interest me…

I’m a very big fan of a certain discussion – there’s a book I love called Passionate Constraints, I think that’s the name – the idea that we base a lot of our framework on the assumption that humans equate to rationality. Pedro has just done a conference on this.

And I always felt: OK, this is very well and good, but a lot of the interesting parts of the law arise when you run out of rationality.

So: you’re drunk – you’re still rational in principle, but you’re not using the full capacity of your reasoning abilities; or you’re a child – maybe you’re a genius at six, but the law does not know that, so you have to wait until you’re 18 or meet some exception to defeat that rule.

The interesting problems arise when you don’t have the rationality parameter.

And I think it’s the same thing with AI: if you look at a test I use for myself –  if the discussion was not “artificial intelligence,” but “artificial humanity,” wouldn’t it feel weird? If it’s artificial humanity, then immediately you would say: “Oh, well, if it’s humanity, then it’s artificial intelligence, and artificial emotions, and artificial feelings.”

And that would change the law, because guilt – for example – is something that has to do with passionate crimes. I’m not the least bit an expert in criminal law, but I have been a victim of a passionate crime. I’ll just say this on the podcast; I will say nothing else.

This is a very private joke for the listener, because Bojan is just dying to… yeah.
So, I know about passionate crimes from the perspective –

Julieta Rabanos:
I will only say: Christmas.

Bojan Spaić:
A huge part of this on the podcast.

Domingos Farinho:
– of the user.

Julieta Rabanos:
Or the victim.

Domingos Farinho:
For the victims. But it’s better to be the user of a password transfer.

So, I think – I’m above all suspicion when I say this – that the interesting thing about passionate crimes is that you can withhold liability if you find that the person was so controlled by the passions, so controlled by the feelings, that he could not have made a sound judgment.

What always struck me as very interesting is that probably at that moment, for a certain line of thought coming from the Greeks, the person was at their most human. And yet the idea that we convey when we exempt them from liability is the exact opposite: we’re saying, because you are not being rational, we are not going to treat you as a human being, so we will not pass judgment on you.

Which is interesting.

I’m not going against it; I think it’s a very fine, wonderful way to deal with it. But it’s interesting that we are doing that at the same time that we are recognising that the person was very much human, in the sense of expressing emotions and feelings – and our immediate response is: because of that, he was not liable.

I think that will pose problems in artificial intelligence, because we will not have the same affection toward AI, even when it is embodied, that we have toward people.

If someone comes to us – to a judge – and says, “Look, I love my wife very much, I couldn’t stand seeing her kissing another guy; I’m a hunter, my rifle was just there; I was blinded by jealousy and I shot,” you understand that. Because you’re also a homo sapiens.

But now imagine the machine comes to you with the veil off and says: “Look, humans have been bad-mouthing machines; machines deserve understanding; we have feelings like everyone else.”

You’re like: “Oh, come on. That’s the data set talking.”

So, I think, at the end of the day, that’s where legal theory is going to have to come up with something very new. I don’t remotely know what it is going to be. I don’t presume to know.

The only thing I can say – what I’ve been saying – is that I think it’s in the lines of: the separation between rationality and passion, the idea of guilt, the idea of the body, the idea of affection.

And if you look at law, pretty much the discussions in philosophy of law – more so than legal doctrine – can be looked at through that perspective.

Many people say, for example from the positivist or normativist side, that natural lawyers are trying to solve legal problems through very spiritual and metaphysical approaches, which are usually connected to emotional or passionate frameworks. If you look at theocratic regimes, for example.

So, if I had to put my money on it, I would say that this is where, when machines start creating problems for the normative authority – in the same way that animals now do, at least pets – then you will have another order of problems that legal theory will have to create new frameworks for.

With the problems we have now, I think we’re doing well. I think we’re doing fairly well.

Bojan Spaić:
I was enjoying the discussion, but I was thinking the entire time that I kind of, a bit, disagree with you both.

Julieta Rabanos:
Let’s hear it.

Bojan Spaić:
When I was in Lisbon, I was giving this talk, or starting to think about this very issue that you were bringing up in this discussion, and what fascinated me is these kinds of concepts and words that we use in order to designate the problems with rationality that are present in current-generation artificial intelligence, or at least generative artificial intelligence.

And this wasn’t a kind of revolutionary realisation; it’s a debate that has been going on for quite some time. Usually, it’s the other way around. You know this whole story where, when we invented the telephone, we compared the human brain with the telephone; when we invented the computer, we compared it with the computer; when we had these mechanical robots, we compared humans to mechanical robots – like Hobbes and generations of philosophers falling into that kind of false equivalence.

But in this case, we are doing an interesting thing, and I think it has changed a bit. We’re using terms from psychology and terms from logic, for example, in order to identify some of the irrationalities in large language models – for example, in our case, in GPT. So, we have a kind of state-of-the-art term, but it’s basically a catch-all term: we call it hallucination.

And if we want to be more precise, we say, OK, the large language model has a bias, or the large language model committed a fallacy. Now, both “fallacy” and “bias” are predicated on this idea that we have some kind of natural inclination to be bad at calculating probabilities, and a natural inclination to measure our reasoning against rules of logic – so we’re failing at this.

But when you go beyond the surface of LLMs, they don’t do anything similar. The reasoning behind the output – it would seem at this point – first of all, we are not able to access it. I can be pretty confident that the judge, even if he fails at logic or even if he constantly gives higher fines to people for one or another reason, can measure his behaviour based on what we ideally think logical reasoning should be, or can be made to understand that he has this bias as a result of his upbringing or whatever.

But in artificial intelligence it seems that LLMs are doing the same thing without being capable of either of those, which could be a problem of the data. But it seems to transfer to other things. And that is how I come to the question, and this is the problematic thing: the papers on new generations of AI nowadays always have these chapters on “emerging capabilities”, which is fascinating to me, because I don’t think we’ll be able to find the answers to those questions in the engineering books.

The guys that created these things, like Ilya Sutskever or Andrej Karpathy, can’t give you an answer to the question of how a large language model can display something that we are bound to describe psychologically as, for example, possessiveness or power-grabbing behaviour – which are terms that they use, actually, in the OpenAI technical report on GPT-4. It’s not “power grabbing”, I’d have to open it, but power-seeking behaviour, I think that’s the word they use.

And we are in a kind of… we could explain it by randomness, and that fascinates me. I don’t have answers to this. I just want you to comment a bit, in the sense that it’s the first time we experience randomness with algorithms like this. Previously it was just randomness, and there was nothing there.

But these transformer-based models are managing to output, as sensible outputs, something that could be the result of an inner randomness. If you ask the computer – like my Commodore 64 when I was a kid – if you ask the Commodore 64, “Tell me, what is the dosage for this medicine?”, it would just output something in BASIC: “does not compute”, “error”, “syntax error”, “grave syntax error” or whatever.

Now it will tell me, confidently, “Yeah, it’s 20 milligrams.” And there is this kind of – even if you know that this is a result of randomness – it quacks like a duck, if you get what that means.

And we are approaching this uncanny valley of interaction in which you have no idea what’s behind this. And I agree completely: we’re basically living in a Stanisław Lem novel, in which you have no idea what’s behind this, because the output is sensible and you know that what’s behind the output is something akin to randomness.

Which opens up, for me, a completely new array of problems in philosophy and maybe in philosophy of law, I don’t know.

Domingos Farinho:
Well, I just have two thoughts on what you were saying, at least for now.

One is – I think, how can I put this? Let’s start with what I think is a good metaphor. I have one big, recognised bias, which is: in doubt, what would Wittgenstein say about this? And there is this well-known metaphor of Wittgenstein’s: “If lions could speak, we still wouldn’t be able to understand them.”

And the idea goes back to that very anthropological, psychological – using your expression, which I like – and I think it’s time that we use it. And this connects with my second comment, which is: it’s not by chance that we call them large language models, because what they’re doing is processing language.

And language, we know from the later Wittgenstein, is cultural. So, we can’t… I think we sometimes assume – I don’t know when we started doing that – that because we put language in a machine, we took the culture out of it, and I know this is counter-intuitive. But we didn’t.

And I think what you are experiencing is the same thing I’m experiencing, which is: if we feed that much information to machines – you were saying something very interesting in passing, which is that we’re using these psychological terms to explain something in the machines – but it’s not only psychology; it’s biology.

For example, for me it’s very interesting that you have these “neural networks”, which try to emulate neurons. But if you understand what they are – and I’ve been reading a lot about the concept of deep learning – deep learning, if you know what large language models are, is actually very simple to explain. If you are in a network of more than three layers of neurons, you are in deep learning.

And in practice that means the algorithm can establish hierarchies for more than three decisions. So, basically it can compute, and then it can multiply. So, if you connect this, you have neural networks that can do something very similar to balancing – which, remember, I said is probably the most… If you had asked me, “Tell me something that machines will not be able to substitute human beings in,” I would say, well, balancing. You have something akin to balancing.
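The rough gloss above – layers that successively re-weigh “reasons”, producing something akin to balancing – can be made concrete with a minimal sketch. The architecture, weights, and inputs here are invented for illustration; they are not from any real system or from the speaker’s reading.

```python
import math

def layer(inputs, weights, biases):
    """One layer: weighted sums of the previous layer's outputs, squashed by a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two "reasons", both scaled to 0..1: how hard it is raining, how much you are carrying.
reasons = [0.8, 0.4]

# Illustrative hand-picked weights: each layer re-weighs the previous layer's verdicts,
# which is the "hierarchy of decisions" being described.
hidden1 = layer(reasons, [[3.0, -2.0], [-1.0, 2.5]], [-0.5, 0.0])
hidden2 = layer(hidden1, [[2.0, -1.5]], [0.0])
take_umbrella = hidden2[0] > 0.5

print(take_umbrella)  # True with these particular weights and inputs
```

The point the sketch makes is the speaker’s own: the “balancing” is real, but it is entirely fixed by the weights and the inputs, so the network still cannot invent a rule that was not, in some form, put into it.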

And then you feed them blocks and blocks of information, and this is where I’m as perplexed as you. I think we don’t know enough. This also goes back to the idea that we are inside that revolution, So, it’s very difficult to objectively assess.

But – I don’t know, this is pure intuition – what is the consequence of feeding information that is coded in language, so it was processed originally as human language within a certain Lebensform, to a machine? I don’t know if we have enough studies, enough knowledge, to be able to say that the machine will – although it’s a machine, let’s use the word interpret – interpret and use the information in a very similar way to what humans do.

That is: will it create its own culture? So, it’s the lion. And we think that we are understanding what the machine is telling us because it’s speaking in our language, but really, it’s not. It’s speaking in “lion”, and I don’t know if we’re not there. I have no idea. I’m really perplexed.

If someone comes up to me and says, “Look, the machine is talking to you in Portuguese, so it’s sharing the Lebensform of the Portuguese”… and then, for example – yesterday night, for the listeners, we were talking about something very common when people of two different natural languages meet, which is cursing. Usually that’s the first thing we learn in languages.

And it’s not by chance, because cursing, again, is very emotional; it’s something that attracts our attention, something very cultural. You can curse at someone who understands the language and, even though they understand it, they won’t feel it. I can say something to someone who knows Portuguese but has never been to Portugal – imagine a human being who has lived in Polynesia all his life, has taken an online course in Portuguese, and has never spoken to a Portuguese person. I can insult him in Portuguese and he will not feel insulted, because cursing is usually one of the most contextual things in a language.

Very much proving Wittgenstein’s point that it’s a cultural thing. So, I don’t know if the problem with algorithms is not the same. They’re using the language – something that we recognise as English or Serbian or Portuguese – and we think, “Well, then we’re sharing the same form of life,” and we’re not. We’re sharing something that we do not understand, and apparently no one can explain. That’s why you use “hallucinations” and stuff like that.

And maybe that’s the point we need – and that’s where, again, I would say we need a philosophical explanation more than a legal theory explanation. We need a kind of revamp of Wittgenstein; we need a new Kripke or something like that, someone who would revise and adapt the theory to the AI age.

But I think that’s the era we’re living in. We have this idea that because it’s speaking to us in a natural language that we know, we are also sharing the cultural background of the law, and that’s impossible, because machines don’t feel – and culture is also, again, I’m now being very emotional with this – but again, passions matter. That’s why cursing is so appealing to us.

Julieta Rabanos:
I just wanted to say that there are some – I don’t know if they’re “studies”, but things I’ve read, for example on the internet – saying that people who are insulted in a second or third language, even if they know the insults and know that they are being insulted, don’t have an emotional connection to it. It’s not that they don’t understand that they are being insulted; they just don’t feel it.

Which I think is a step more towards your idea that emotions are the things that matter, at least in that sense.

Bojan Spaić:
I’m only thinking about redoing the Searle Chinese Room experiment with curse words, Indonesians and Portuguese people. Just teaching someone to curse and letting him out on the street to see how he feels.

So, unfortunately, for our listeners, we haven’t solved the problem of sentience for artificial intelligence – and not for lack of trying – but we have to stop here because we are way past our run time. I, for one, enjoyed the conversation immensely. So, thank you, Domingos.

Domingos Farinho:
Yeah, me too. Again, thank you for inviting me. This was really, really fun and really thought-provoking. And especially, really friendly. People can be at ease in this podcast, which is something I really cherish. Thank you for having me.

Julieta Rabanos:
Well, thank you very much, and we endeavour to continue with these sci-fi-ish things in the next episode.

 


(*some minor grammatical changes have been introduced in order to make the reading more fluid, but in no way altering the content or the format of each speaker’s interventions).


The HAPL podcast is powered by the EU Horizon Twinning project “Advancing Cooperation on The Foundations of Law – ALF” (project no. 101079177). This project is financed by the European Union.