Júlia Pareto: “The great ethical risk of AI is the abdication of human freedom”

Júlia Pareto, who holds a PhD in Philosophy, is one of the thinkers exploring the challenges of this technology from an ethical perspective. © "la Caixa" Foundation

Barcelona

20.09.24

6-minute read

Júlia Pareto Boada holds a PhD in Philosophy from the Universitat de Barcelona and is a researcher at the AI&Democracy Chair (STG-EUI) and the Institute of Robotics and Industrial Informatics, CSIC-UPC.

Recent advances in artificial intelligence (AI) are raising expectations about its potential, but also debates about its risks. Júlia Pareto, who holds a PhD in Philosophy, is one of the thinkers exploring the challenges of this technology from an ethical perspective. She works at the European University Institute in Florence and at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC), researching the development of these systems, especially in the social and healthcare fields. On 30 April she took part in the debate "What new ethical and legal challenges does artificial intelligence pose?" at the Palau Macaya.

What are the main ethical challenges posed by AI?

The big challenge, and I would put it in the singular, is to decide why and for what purpose we're going to develop this technology, what the reasons for its deployment are. Given the risks posed by its technological autonomy, reflection has focused very much on the "doing" of these artificial agents, on whether their behaviour is in line with certain values. But that forgets that the first question should concern their "being", that is, the issue of their legitimacy, which means turning our attention to the interests and purposes they're supposed to serve. Ethics has the task of constructing meaning. I believe it's necessary to reduce the noise around the anecdotal and take some time to think about what we're doing and where we're going with AI systems, which inevitably leads us to revisit fundamental themes of the philosophical tradition.

In your work you also point out that ethics should not be confused with morality.

Júlia Pareto works at the European University Institute in Florence and at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) © "la Caixa" Foundation

Popularly, ethics and morality are treated as synonyms, but from a philosophical point of view there's a very important distinction between the two concepts, which is worth remembering now that ethics is being called upon in various practical contexts to assist in decision-making. Some experts in other disciplines, such as engineering, engage in dialogue with philosophers and expect them to say what should be done. That is something that pertains to morality, which is linked to socially accepted values or norms. Ethics reflects on the why, and the response it offers is an argument. This is an interesting distinction: morality is about actions; ethics is about reasons. They're two very different areas of practical life.

What should an ethics that encompasses the complexity of AI look like?

We need to move away from the idea of general ethical reflection and try to narrow it down to specific fields of practice or activities. Reflection should not be decontextualised from the specific area of action that the technology serves, from the framework of purposes and values of the practices for which it is conceived as an instrument. Historically, we’ve moved from an ethics of technology with a capital “T” to an ethics of specific technologies: the ethics of robotics, nanotechnology, computing... We should now make a final hermeneutic shift towards the activities that these technologies serve, so as not to remain in a discourse that focuses on the instrument without paying attention to its teleologically subordinate nature.

You are an expert in social and assistive robotics. How is this technology transforming care?

Today, technology can perform tasks that used to be reserved exclusively for human agency because they require a certain level of personal interaction. We now have embodied AI systems that could take on roles in healthcare or education that require interaction through speech and gestures. The novelty lies not in the technological mediation of these activities, but in the transformation of the nature of this mediation: given the ability of these robots to interact with humans as quasi-others, we can introduce them into the very core of relational practices such as caring.

Júlia Pareto: "If we begin to delegate tasks to machines without addressing these questions, we’ll be undermining our condition as autonomous beings". 
Are we talking about replacing carers with robots?

The European deployment of robotics for care maintains continuity with the traditional paradigm under which robots are conceived as tools for tasks that are dirty, boring or dangerous. The narrative is that robots serve to increase the quality of care, which does not mean replacing health professionals, but using robots to relieve them of tasks that are less significant in terms of human value because they're heavy, repetitive and mechanical, such as feeding, helping people dress or assisting with physical and cognitive exercises. This way, professionals can focus on the most intersubjective part of care relationships. From an ethical point of view, this technological policy must be accompanied by a hermeneutic reflection on care, on the values and purposes of this practice, which ultimately has to do with world-making. If we begin to delegate tasks to machines without addressing these questions, we'll be undermining our condition as autonomous beings. I believe that the great ethical risk of AI is the abdication of human freedom.

To what extent could care robots harm the people they care for?

When people think about how care robots can harm people, they tend to start from an idea of care as a practice that has only a private dimension, and focus on the dehumanisation or lack of respect for human dignity that interaction with these robots can entail. But care is also political. A lot of progress has been made in feminist philosophy and ethics in this regard, and now we seem to be going backwards. It's important that we understand care as something that involves power relations and responsibilities that have to be distributed among citizens, and not focus normative-ethical attention solely on the fact that the robot cannot act in the same way as a human being. Nor should we forget that technology plays a constitutive role in these power relations.

Statements by Júlia Pareto, PhD in Philosophy. © "la Caixa" Foundation

How do you think AI can affect social inequality?

I believe we should not fall into a certain naiveté and think that these technologies are being developed in a context completely free of economic interests, or one where there's no competition on a geopolitical scale. We cannot fall back on the prejudices of the past and forget that technology has a moral and political dimension and thus contributes to the socio-political shaping of human life. This is why it's so important that the public-private debate is conducted, and conducted well, so that the adoption of these tools, which are largely developed by private companies, does not clash with the values pursued from a public service point of view.

What role should ethics play in this scenario of technological disruption and the efforts being made to regulate it?

Technological innovation runs ahead and the law races to catch up, but it always arrives late. This is normal, because the law has to capture and consolidate what is morally worth protecting, that is, the values we want to defend. Ethics has the agility and flexibility to reflect and to help the law ground its norms. From the perspective of ethics we should continue to think not so much about setting limits as about accompanying the process in a proactive way, thinking about what kind of societies, relationships and socio-political structures we're going to build and helping to materialise these concepts. That has always been the nature of ethics.

Latest Update: 20 September 2024 | 15:48