Fundamental rights and legal personality of robots: why?
DOI: https://doi.org/10.18042/cepc/dpc.44.02

Abstract
The current trend of attributing legal personality to robots seems to stem from the need to hold them accountable for civil liability arising from potential harm in everyday life, both in the workplace and at home, depending on the context of their intervention. This is due, on the one hand, to the possibility of autonomous learning through neural networks that mimic the structure of the human brain and, on the other, to the biases that the creators of algorithms may carry in their perception and understanding of the world. However, it could be argued that the truly relevant issue here, should these “entities” eventually be considered legal subjects endowed with legal capacity, is the impact this may have on fundamental rights and their protection if the limits are not clearly defined. In this sense, there is a question that precedes all the issues raised around so-called humanoids: is it really necessary to grant them legal personality in order to address the legal questions that arise from their involvement in everyday life? Are the “apparent gaps in the law” that have been discussed at length genuine, especially when the harm is caused by a vast set of algorithms capable of making complex decisions as if they were human? Perhaps the distorted and widely spread idea of what Artificial Intelligence is or is not has led us to believe that including robots in the category of legal subjects, in need of and endowed with legal personality, is the most suitable position for us as a society to be prepared for what lies ahead. Presented this way, it sounds truly alarming. However, it is highly likely that we already have everything we need, and that what is required is simply to clarify and understand, with scientific rigor, what AI is, how far it can go, and what new rights it is giving rise to.
