Should Robots Have Rights? Why or Why Not?

A robot is ‘a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer’.[1] Today’s robots span a wide spectrum of intelligence, yet even the most advanced are clumsy, easily failing the Turing Test of intelligent behaviour. That does not preclude robots from passing the test in some future world. It is therefore pertinent to ask whether some future robots should have rights, and at what point those rights should be granted. To answer this question, it must be established what a right is, what the purpose of a right is, and whether robots fit the parameters for having rights. While we have already seen gestures such as the honorary citizenship granted to the android Sophia by Saudi Arabia, it will be argued here that the ability to be morally culpable is a sufficient condition for having rights.


The Hohfeldian Analytical System, one of the most widely accepted analyses of rights, presents us with four types of rights, or ‘Hohfeldian Incidents’: the First Order Privileges and Claims, and the Second Order Powers and Immunities. Only the First Order incidents need to be defined for this argument.

A has a privilege to φ if and only if A has no duty not to φ

A has a claim that B φ if and only if B has a duty to A to φ[2]

For example, Alice has a privilege to live because she has no duty not to live; likewise, Alice has a claim that Bob not kill her because Bob has a duty to Alice not to kill her.
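For readers who prefer symbols, the two incidents can be written in a deontic-logic shorthand. The notation below is my own illustrative rendering, not Hohfeld’s or Wenar’s:

```latex
% Illustrative deontic-logic shorthand (my notation, not Hohfeld's):
%   D_A(\varphi)         -- A has a duty to \varphi
%   D_{B \to A}(\varphi) -- B has a duty, owed to A, to \varphi
\begin{align*}
  \text{Privilege:} &\quad \mathrm{Priv}_A(\varphi) \iff \lnot\, D_A(\lnot \varphi) \\
  \text{Claim:}     &\quad \mathrm{Claim}_A(B, \varphi) \iff D_{B \to A}(\varphi)
\end{align*}
```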

We can rule out the two absolute positions on the rights of robots: that robots should never have rights and that robots should always have rights. It is clear that, normatively, robots already have privilege-rights: they have the privilege to exist. However, for rights in this context, Hohfeld himself held that rights ‘in the strictest sense’ are claims rather than any other Incident.[3] And sheer intuition tells us that the simplest of robots, such as an ‘intelligent’ vacuum cleaner, does not possess a claim-right to existence.[4]

We can equally rule out the other extreme: that nothing fitting the definition of a robot could ever have rights. Here we can employ a simple thought experiment. A person suffers an injury and replaces part of their body with robotic parts, then another injury, and so on, until almost nothing of the original body remains, at which point they decide to upload their brain into a computer.[5] This person would retain all of their cognitive abilities and so would presumably retain at least some rights. Now suppose someone codes a robot with the same information used to upload the person; it would presumably have the same rights. The only exception would be if rights are granted by virtue of being physically human and alive, which raises the question of what being alive is. The objection would run that robots do not qualify as alive because they are made of wires rather than cells, and because they were programmed. ‘Robots have nothing like [an evolutionary and developmental history], they don’t have bones, they don’t have blood, they don’t have any genuine emotions’, as Kerstin Dautenhahn, a researcher in social robotics, puts it.[6] But what is the real difference here between a human and a robot? Evolution programs us and cells drive us; for robots, it is code and wires. That one arises in nature through physical laws and processes while the other does not seems arbitrary, and therefore insufficient to deny one party rights: access to rights cannot depend on whether an individual came into being ‘naturally’, as that standard would also exclude, for example, individuals born through IVF.

The question then becomes: at what point do robots develop claim-rights? Ultimately, the parameters for having claim-rights are intertwined with the function of a right. If the ‘right’ of a robot under consideration does not fulfil the function of a right, it cannot be called a right. To illustrate: a ball is placed in front of you and you are asked to determine whether it is a football. A football is a ball with which one can play football; if the object fails this test in any respect, it cannot be called a football. Likewise, before asking when robots acquire rights, we must establish whether a robot can possess rights at all under the prevailing accounts of what a right is for.

There are two primary interpretations of the purpose of a right; we shall call them the ‘systemic’ interpretation and the ‘individualist’ interpretation. ‘Robots will be part of both[…]our ecosystem and our society’, argues Hussein A. Abbass, professor of artificial intelligence at the University of New South Wales.[7] The systemic interpretation holds that rights exist so that a society can function effectively and efficiently: we have rights to escape the ‘nasty, brutish, and short’ life of a Hobbesian state of nature and to secure practical benefits.[8] Defenders of Western capitalist systems, for example, argue that the right to private property makes society function more efficiently. This interpretation alone implies that intelligent robots should have rights, for two reasons. The first is to prevent those with power over robots from taking malicious action against them, such as damaging them. The second, and more pressing, is the potential consequence of intelligent robots not having rights: an uprising to win those rights, the most probable way in which withholding rights would cause society to decay back into a state of nature.

There is also the more popular individualist, moral interpretation, of which there are two main theories. The Will Theory states that a right gives its holder control over others’ duties: ‘The individual who has the right is a small scale sovereign to whom the duty is owed’, as Hart puts it.[9] This can apply to robots, since a human can owe a duty towards an ‘intelligent’ non-human; even today, individuals may owe duties to bodies such as companies or to society. There is also the Interest Theory, which states that the function of rights is to further the interests of the right-holder. As intelligent robots can have interests, this too would seem to apply to them: it makes sense, for example, to code self-preservation into a robot as an interest, and a claim-right to existence would further that interest.

Having established that a robot can, in theory, possess rights, we return to our original question: at what point do robots develop these rights? It seems that moral agency, and by extension intelligence, is a sufficient condition for having rights. That is, if one has moral agency, one should have rights, though one can still have rights without moral agency, as a newborn baby does. From our definition, the correlative of a right is a duty, and both require moral agency.[10] If an individual can be assigned a duty that they are expected to uphold, they can be thought of as a moral agent and therefore deserving of rights.

Can robots be assigned a duty? If a robot were assigned a duty that it subsequently failed, would the manufacturer or the robot be to blame? This is the test of whether the robot is morally culpable and hence a moral agent: if the blame lies with the manufacturer, so does the duty. Therefore, if we can understand the prerequisites for being blameable for failing a duty, we can determine whether a robot should have rights. Ultimately, this comes down to the ability to make decisions and the ability to learn. If a robot lacks the ability to make decisions, it does not have agency and cannot be morally culpable, just as we do not blame someone who is coerced into acting against their will. For a robot to make decisions, they must be the robot’s own decisions: learnt, as opposed to programmed to do X whenever Y. This in turn rests on the ability to learn. Learning not only gives the robot a decision-making process of its own, but also lets it learn from its mistakes, a crucial part of taking responsibility for failing to uphold a duty. The distinction is sketched below.
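As a purely illustrative toy sketch of that distinction, consider the following Python comparison. The names and the bandit-style update rule are my own assumptions, not any real robotics API: the hard-coded controller’s ‘decision’ was made by its programmer, while the learning controller’s preferences are shaped by its own experience.

```python
import random

# Hard-coded controller: "do X when Y". The decision was made by the
# programmer, so blame for a failure plausibly rests with the manufacturer.
def hardcoded_controller(obstacle_ahead: bool) -> str:
    return "turn" if obstacle_ahead else "forward"


# Learning controller: a toy bandit-style learner whose action preferences
# come from its own experience rather than a fixed rule table.
class LearningController:
    def __init__(self, actions=("turn", "forward")):
        self.values = {a: 0.0 for a in actions}  # learned action values

    def decide(self) -> str:
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float, lr: float = 0.1) -> None:
        # Feedback updates future behaviour: the mistake itself changes
        # the policy, which is the "learning from mistakes" the argument needs.
        self.values[action] += lr * (reward - self.values[action])


# Usage: the learner is punished for a bad choice and adjusts accordingly.
robot = LearningController()
action = robot.decide()
robot.learn(action, reward=-1.0)  # hypothetical negative feedback
```

Blaming the hard-coded controller makes little sense, since its behaviour traces entirely to the manufacturer; blaming the learner at least targets a policy the robot itself acquired.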

Even with something as simple as a chess AI, it does not seem far-fetched to blame the AI when a move is not optimal. It can observe a game, learn the rules, practise, and take time to think, much as humans do, without being hard-coded with every possibility of every game. The Chinese Room Argument and similar conjectures presume that robots cannot understand and decide as humans do. But what is a human decision? It draws on innate knowledge and empirical knowledge. Innate knowledge mirrors a robot’s hard code: when we face a predator, signals override our rationality and tell us to fight or flee. Empirical knowledge is mirrored even more closely by robots, which learn in a similar way. We can therefore conclude that robots that have learned to such an extent that their ‘robotness’ is no longer apparent can make decisions, can learn, and are ultimately intelligent. It is this specific form of intelligence that permits them to be morally culpable and therefore to qualify for both duties and rights.[11]


[1] Definition of ‘robot’, Oxford English Dictionary.

[2] Wenar, Leif, “Rights”, The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/rights/>.

[3] Hohfeld, W. N. (1913). Some fundamental legal conceptions as applied in judicial reasoning. Yale Law Journal, 23(1), 16–59.

[4] I use the example of ‘existence’ rather than ‘life’, as the latter would only distract from the central idea.

[5] It is important to note that this is theoretically possible in information-theoretic terms: a brain and its makeup contain information that can, in principle, completely describe it and its processes.

[6] Dautenhahn, K. (2017, April 19). Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd. Retrieved from https://youtu.be/wPK2SWC0kx0

[7] Sigfusson, L., & Abbass, H. A. (2020, May 23). Do Robots Deserve Human Rights? Retrieved from https://www.discovermagazine.com/technology/do-robots-deserve-human-rights

[8] Hobbes, T. (1996). Hobbes: “Leviathan”. Cambridge: Cambridge University Press.

[9] Hart, H. L. (1982). Essays on Bentham: Studies in jurisprudence and political theory. Oxford: Clarendon Press.

[10] If A has a claim-right over B, B has a duty to A.

[11] Such as by passing the Turing Test.

Bibliography

Wenar, Leif, “Rights”, The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/rights/>.

Steiner, Hillel. “Directed Duties and Inalienable Rights.” Ethics, vol. 123, no. 2, 2013, pp. 230–244. JSTOR, www.jstor.org/stable/10.1086/668708.

Wellman, Carl. “The Functions of Rights.” ARSP: Archiv Für Rechts- Und Sozialphilosophie / Archives for Philosophy of Law and Social Philosophy, vol. 97, no. 2, 2011, pp. 169–177. JSTOR, www.jstor.org/stable/23680967.

Wenar, Leif. “The Nature of Rights.” Philosophy & Public Affairs, vol. 33, no. 3, 2005, pp. 223–252. JSTOR, www.jstor.org/stable/3557929.

Dautenhahn, K. (2017, April 19). Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd. Retrieved from https://youtu.be/wPK2SWC0kx0

Hart, H. L. (1982). Essays on Bentham: Studies in jurisprudence and political theory. Oxford: Clarendon Press.

Hobbes, T., Tuck, R., Geuss, R., & Skinner, Q. (1996). Hobbes: “Leviathan”. Cambridge: Cambridge University Press.

Hohfeld, W. N. (1913). Some fundamental legal conceptions as applied in judicial reasoning. Yale Law Journal, 23(1), 16–59.

Sigfusson, L., & Abbass, H. A. (2020, May 23). Do Robots Deserve Human Rights? Retrieved from https://www.discovermagazine.com/technology/do-robots-deserve-human-rights

Chinese room. (2021, January 26). Retrieved January 31, 2021, from https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment
