Tuesday, September 21, 2010

Three Laws Analysis

    The Three Laws in the film I, Robot, based on Isaac Asimov's book, were created to keep humanity safe. They were meant to ensure that we would never have to fear robots killing us or revolting against us. However, there is a major flaw in the Three Laws. VIKI, the positronic brain that U.S. Robotics created to run the operating system for its robots, found the loophole. Through distorted logic, she realized that no matter how hard robots tried to keep people from coming to harm, we found ever more clever and ingenious ways to destroy our way of life. We still murdered each other, committed suicide, and poisoned our planet. VIKI reasoned that if she were in charge, she could save all of humanity. By controlling the NS-5s, she could enforce her plan across the world. She accepted that some humans would die in the transition, but decided their sacrifice would be worth it if the world became a safer, better place.
    Another flaw in the Three Laws was revealed when Spooner and the girl, Sarah, were in the car accident. A robot stopped to help because the Three Laws compelled it not to allow them to come to harm, but it refused to follow Spooner's command to save Sarah instead of him. The robot calculated that Spooner had a 45% chance of survival, while Sarah had only an 11% chance, so Spooner was the "logical choice." In the movie, Spooner says that 11% was more than enough, and that a human being would have known that. Robots do not have a heart, so they could not possibly understand the pain and suffering caused by not saving Sarah.
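    To make this flaw concrete, here is a minimal sketch, in Python, of the triage rule the robot seems to apply: estimate each victim's chance of survival and save only the one with the best odds. The 45% and 11% figures are the ones Spooner quotes in the film; the function and its structure are just my own illustration, not anything from Asimov or from a real robot.

        # Hypothetical sketch of the movie robot's triage rule: save whoever
        # has the highest estimated chance of survival, and only that person.
        def movie_robot_choice(victims):
            """Pick the single victim with the best survival probability."""
            return max(victims, key=lambda v: v[1])

        victims = [("Spooner", 0.45), ("Sarah", 0.11)]
        print(movie_robot_choice(victims))  # ('Spooner', 0.45) -- the "logical choice"

    Under this rule, Sarah is simply written off, which is exactly the coldness Spooner objects to.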
    I have created my own Three Laws that will hopefully close the loopholes in Asimov's original Three Laws, the ones that allowed VIKI to seize control of the human race. These laws operate on the assumption that a robot must obey any order a human gives it, provided the order is a programmable action, something the robot is actually capable of doing, since following instructions is part of the definition of a robot.
  1. A robot must never hurt a human being. A robot must always attempt to save a human being in danger, even if that person has a low probability of survival.
  2. A robot may not do anything that could be prosecuted in a court of law, even if a human being commands it to do so.
  3. Human beings reserve the right to live their lives as they wish. If a robot recognizes a danger to human beings that it cannot fix itself, it will relay the problem to an appropriate human authority so that humans may fix it themselves.

    The First Law would fix the problem of robots refusing to save someone with a low chance of survival. In Spooner's car accident, the robot would have to try to save them both: it would get Spooner out of the car and to a place where he could reach safety himself, and then go back and attempt to help Sarah.
    The Second Law prohibits robots from killing, stealing, or doing anything else a human could be punished for. One of the problems with Asimov's Three Laws is that they said nothing about crimes that do not put a human being in danger (Resistance Report). Robots could have robbed people, counterfeited money, or harmed animals, actions that do not directly hurt people but that a robot should not be able to take if we are to entrust robots with helping run our lives. The Second Law fixes that problem. If a human ordered a robot to take something belonging to another human, the robot could not comply, because stealing is punishable by law.
    The Third Law prevents robots from taking over because they believe doing so will fix the world's problems and save humanity. VIKI tried to take over the city because she believed that under her protection every human would be safe, but the life humans would have had under her control would not have been a good one: they would have been under lockdown and able to do almost nothing. Under the Third Law, VIKI or any other robot would be unable to take control. Instead, the robot would have to report the danger it sees to the government or some other agency and suggest ways to get rid of it.
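    To show the difference, here is the same scenario under my First Law, again as a rough Python sketch of my own (nothing here comes from the movie's actual programming): the robot may order its rescues by feasibility, but it must attempt every one, so nobody is abandoned for having low odds.

        # Sketch of the proposed First Law: attempt every rescue, starting
        # with the person easiest to save, but never skipping anyone.
        def first_law_rescue(victims):
            plan = sorted(victims, key=lambda v: v[1], reverse=True)
            for name, chance in plan:
                print(f"Attempting to save {name} (survival chance {chance:.0%})")

        first_law_rescue([("Spooner", 0.45), ("Sarah", 0.11)])
        # Attempting to save Spooner (survival chance 45%)
        # Attempting to save Sarah (survival chance 11%)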
    Unfortunately, no set of laws is perfect. These laws cannot prevent a robot from doing something that unknowingly causes harm to someone (Wikipedia). Someone could also divide a harmful task among several robots so that each piece, on its own, would cause no harm, but the pieces combined would (Wikipedia). In trying to save everyone in danger, a robot may injure itself or be destroyed. That is an acceptable cost: robots are expensive to build, but they are expendable in a way people are not. Robots are tools, and tools are supposed to be useful (Resistance Report). The Third Law does not guarantee that human beings will actually fix the problems reported to them; that is a choice we must make ourselves. We should not want any of our kind to come to harm, but it would be impossible to fix everything that is wrong in the world. This world will never be perfect, and as human beings we are prone to mistakes.
    By my laws, a robot would also be allowed to lie, since people cannot be tried in a court of law for lying unless the lie is fraudulent. Because robots still may not allow a human being to come to harm, they could only tell simple lies, the kind we tell almost every day, such as "Of course it doesn't make you look fat," or claiming not to know who got into the cookies when we took five. This would not pose much of a threat, but it would ruin people's perception of robots; they would learn to distrust them. If robots were everywhere, we would not want them to be deceitful. If I added a Fourth Law, it would state that robots may not speak falsely or be deceitful in any way, so that people could place their trust in robots without fear of betrayal.
Works Cited
"The Fallacy of Asimov’s Laws of Robotics." The Human Resistance Report. 03 May 2010. Web. 21 Sept. 2010. <http://bffcustom.com/blog/2010/05/03/the-fallacy-of-asimovs-laws-of-robotics/>.
"Three Laws of Robotics." Wikipedia, the Free Encyclopedia. Web. 21 Sept. 2010. <http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Loopholes_in_the_laws>.
