Christopher Lau
Dr. Michael Weems
English 100
23 April 2017
Today, artificial intelligence (AI) plays an important role in humanity's future, and it is slowly approaching human intelligence. This echoes the story of Frankenstein's monster: Dr. Frankenstein created a monster he could not control, and over time it learned on its own, came to hate its creator, and returned to kill him. In this essay, however, Dr. Frankenstein is humanity and the AI we create are the monsters. This paper will argue that AI does not need or deserve rights because of the danger AI can pose, the ways AI could take advantage of those rights, and the negative consequences that granting such rights would bring.
The foundation of AI starts with the world's first computer: the Electronic Numerical Integrator and Computer (ENIAC). "In 1942, ENIAC was an idea proposed by physicist John Mauchly as a solution to fast and complex calculations during the war. During 1943-1945, ENIAC was officially built and was the first computer to run at electronic speed without being slowed by any mechanical parts. By 1955, it may have run more calculations than all of mankind had performed up to that point" (Computer History Museum).
Another foundation comes from the idea of machine learning, a term coined by Arthur Samuel, a pioneer in AI. Machine learning occurs when a system automatically learns a program from data, which is often a very attractive alternative to constructing programs manually. In the last decade the use of machine learning has spread rapidly throughout computer science and beyond: it powers Web search, spam filters, recommender systems, ad placement, credit scoring, fraud detection, stock trading, drug design, and many other applications, including AI such as Siri, Jibo, and HitchBOT (Domingos).
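To make the idea concrete, here is a minimal sketch of one of the applications Domingos lists, a spam filter that learns its rules from labeled examples rather than being programmed by hand. It is written in Python using the scikit-learn library, and the sample messages and labels are invented purely for illustration; any small labeled dataset would work the same way.

```python
# A toy spam filter: the program "learns" its rules from labeled examples
# rather than being written by hand. These messages and labels are invented
# for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "limited offer, claim your reward",
    "lunch at noon tomorrow?", "here are the meeting notes",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn each message into word counts, then fit a Naive Bayes classifier.
vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(messages), labels)

# The trained model can now classify text it has never seen before.
test = vectorizer.transform(["claim your free reward"])
print(model.predict(test))  # prints [1], i.e. spam
```

The point is that no programmer wrote a rule saying that words like "free" or "reward" signal spam; the system inferred that pattern from the data, which is the sense in which Samuel said machines could learn without being explicitly programmed.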
Siri is an AI that Apple builds into the iPhone. It is a voice-recognition AI that can make everyday actions easier, such as booking an appointment, setting an alarm, or looking up information at any time (IOS - Siri). HitchBOT is very different from Siri because it is a social robot, meaning it has the traits of a sociable person, though less personality. HitchBOT was designed by Doctors David Harris Smith and Frauke Zeller. Its purpose is to interact with people in order to get from one place to another; essentially, it explores the world, hoping to make new friends along the way (About HitchBOT). Jibo will be discussed later in the essay.
Before these advanced AI, people like Alan Turing tried to write chess programs that would be able to learn on their own and surpass humans. In 1948, Turing and his colleague Champernowne wrote the first chess program. A few years later, they tried to run the program on an extremely slow computer; it worked, but it could only play one move every half hour. In this period chess was difficult for AI to grasp, and more proficient chess programs were not developed until the 1960s (Nilsson 123).
Towards the late 1950s, Arthur Samuel built a checkers program that could learn on its own; it was through this work that Samuel coined the term machine learning. About five years later, the program played extremely well, "beating Robert Nealey, a blind checkers master from Connecticut. In 1965, the program played four games each against Walter Hellman and Derek Oldbury (then playing a match for the World Championship), and lost all eight games" (Nilsson 128).
Although these are all interesting and impressive achievements, they do not mean that AI deserves rights. AI has the potential to be more dangerous than it seems. Does a chess AI that grows increasingly smarter deserve or need rights? And what happens if we take the same kind of AI and let it command our Navy? Consider what could happen: today we have unmanned aerial vehicles that use AI to navigate, as when flying on autopilot. More to the point, AI is already used in war, in autonomous weapons, and in targeted killings; drones are used to kill terrorists in the Middle East. But as an AI learns and grows more intelligent, it may begin killing innocent people or causing harms such as property damage. This unpredictability is dangerous because AI has no morals or feelings. "The local, specific behavior of the AI may not be predictable apart from its safety, even if the programmers do everything right" (Bostrom).
Furthermore, if an AI does kill someone, it cannot be prosecuted the way humans are. Prosecution is supposed to punish the offender and protect others from harm. But AI has no emotions, so what is the point of punishment if an AI does not fear doing wrong? It could simply keep offending without caring. And since advanced AI can be unpredictable, no one can fully control its actions. So if an AI goes its own way, does the blame fall on the program's creator even though the act was out of the creator's control? Possibly: involuntary manslaughter is the legal term for an accidental killing. Yet it does not seem fair to blame the creator. As Nathan Heller of The New Yorker asked: "What would an autonomous vehicle do if it must choose between swerving into a crowd of ten people or slamming into a wall, killing its owner?" (Heller).
What happens if we do give them rights? A social robot called Jibo is a prime example of what a robot with rights might look like. Jibo can socialize with people, get to know a family, and continuously learn. A robot like this is so similar to a human that people are prompted to treat it not as property but as a friend or companion (Jibo Robot). Dr. Kate Darling is a Research Specialist at the MIT Media Lab and a Fellow at the Harvard Berkman Center. In her paper "Extending Legal Protection to Social Robots," she raises strong points about why social robots like Jibo should be given rights.
“At first glance, it seems hard to justify differentiating between a social robot, such as a Pleo dinosaur toy, and a household appliance, such as a toaster… Yet there is a difference in how we perceive these two artifacts. While toasters are designed to make toast, social robots are designed to act as our companions. So what is the difference to other objects? It is true that humans have always been susceptible to forming attachments to things that are not alive, for example to their cars or stuffed animals. People will even become attached to virtual objects. In the video game Portal, for example, when players are required to incinerate the companion cube that has accompanied them throughout the game, some will opt to sacrifice themselves rather than the cube, forfeiting their victory. One factor that may play a significant role in the development of such unidirectional relationships to objects is a psychological caregiver effect. For example, Tom Hanks develops a relationship with a volleyball in the movie Cast Away. The interesting aspect of his attachment is nicely demonstrated when he inadvertently lets the volleyball float out to sea. Realizing that he is unable to rescue his companion, he displays deep remorse for not taking better care. The focus thereby is not on his personal loss, but rather on his neglected responsibility toward the object: he calls out to it that he is sorry.”
In this excerpt, Darling gives strong examples, drawing on popular culture such as Cast Away and on attachment to virtual objects. There will come a time when humans grow just as attached to the AI in social robots like Jibo. Giving such robots rights, however, invites abuse of those rights. First, if someone buys a Jibo and becomes extremely attached to it, they may try to acquire many more of the same robot to feel happier; at some point they will own too many and begin to neglect some of the robots. If the robots have rights, that person can be charged just as people are charged today for animal abuse. Second, someone with the same kind of attachment Tom Hanks's character had to the volleyball could choose the inanimate object over another living person; after all, people have given up other people for things like money, and the same could happen with people who have strong attachments to AI. Lastly, would it be legal for someone to buy a social robot that has rights and then dismantle it? No: if the robot had rights, destroying it would be considered murder. There would be no difference between killing a human baby and killing an AI robot; there might be a difference in emotional appeal, but not in the charge of the crime.
Works Cited
“About HitchBOT.” HitchBOT. N.p., n.d. Web. 29 Apr. 2017. http://mir1.hitchbot.me/about/.
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” The Cambridge Handbook of Artificial Intelligence (n.d.): 1-20. 2011. Web. 27 Apr. 2017. http://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
“Computer History Museum.” ENIAC. N.p., n.d. Web. 26 Apr. 2017. http://www.computerhistory.org/revolution/birth-of-the-computer/4/78.
Darling, Kate. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects.” SSRN, 28 Aug. 2012. Web. 29 Apr. 2017.
Dockrill, Peter. “Controversial AI Has Been Trained to Kill Humans in a Doom Deathmatch.” ScienceAlert. N.p., 1 Oct. 2016. Web. 28 Apr. 2017. http://www.sciencealert.com/controversial-ai-has-been-trained-to-kill-humans-in-a-dom-deathmatch.
Domingos, Pedro. “A Few Useful Things to Know about Machine Learning.” University of Washington, n.d. Web. 27 Apr. 2017. https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf.
Heller, Nathan. “If Animals Have Rights, Should Robots?” The New Yorker. The New Yorker, 17 Nov. 2016. Web. 28 Apr. 2017. http://www.newyorker.com/magazine/2016/11/28/if-animals-have-rights-should-robot.
“IOS - Siri.” Apple. Apple, n.d. Web. 29 Apr. 2017.
“Jibo Robot - He Can’t Wait to Meet You.” Jibo. N.p., n.d. Web. 01 May 2017. https://www.jibo.com/.
Knight, Heather. “How Humans Respond to Robots: Building Public Policy through Good Design.” Brookings. Brookings, 23 Aug. 2014. Web. 28 Apr. 2017. https://www.brookings.edu/research/how-humans-respond-to-robots-building-public-policy-through-good-design/.
Nilsson, Nils J. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge UP, 2010. Print.
Omohundro, Stephen M. “The Basic AI Drives.” N.p., n.d. Web. 29 Apr. 2017. https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf.
Press, Gil. “A Very Short History Of Artificial Intelligence (AI).” Forbes. Forbes Magazine, 30 Dec. 2016. Web. 28 Apr. 2017. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/3/#2cb11e22be7c.
Russell, Stuart. “2015: What Do You Think About Machines That Think?” Edge.org. Edge, 2015. Web. 29 Apr. 2017. https://www.edge.org/response-detail/26157.