Wednesday, September 29, 2010

Full Speed Ahead

To make an airplane go from rest to motion:
1. Get in the airplane.
2. Start the engine.
3. Check all systems.
4. Make sure the runway is clear.
5. Give the safety lecture.
6. Open the throttle.
7. Start down the runway.
8. Pull up to take off.
9. Raise or lower the wing flaps to bring the airplane to the desired altitude.
10. Keep the engine running and the wing flaps at the correct angle to stay in flight.


Things that would inhibit this object from moving include:
~lack of fuel
~no tires
~no pilot
~no wind
~broken propeller
~malfunctioning electrical systems


Ways that robots move include:
~wheels, connected through wires and sensors to the robot's programming
~legs
~motors
~propellers

Ways that a robot's movement can be inhibited:
~malfunctioning electrical systems
~mud
~low battery
~glitch in programming
~obstacles in its way

Five steps for the robot to go forward two rotations (a short code sketch follows the list):
1. Start left motor.
2. Start right motor.
3. Wheels turn 720 degrees.
4. Stop right motor.
5. Stop left motor.
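
To see how these five steps might look as an actual program, here is a minimal Python sketch. The Motor class below is just a stand-in I made up for illustration; a real robot kit (a LEGO Mindstorms brick, for example) would have its own motor commands, so the names start, turn, and stop are assumptions, not a real API.

# Minimal sketch of the five steps above. Motor is a made-up stand-in
# for a real motor driver; it only tracks what it was commanded to do.

class Motor:
    def __init__(self, name):
        self.name = name
        self.running = False
        self.degrees_turned = 0

    def start(self):
        self.running = True
        print(f"{self.name} motor started")

    def turn(self, degrees):
        # Record rotation only while the motor is running.
        if self.running:
            self.degrees_turned += degrees

    def stop(self):
        self.running = False
        print(f"{self.name} motor stopped after {self.degrees_turned} degrees")


def forward_two_rotations(left, right):
    left.start()      # 1. Start left motor.
    right.start()     # 2. Start right motor.
    left.turn(720)    # 3. Wheels turn 720 degrees
    right.turn(720)   #    (two full rotations per wheel).
    right.stop()      # 4. Stop right motor.
    left.stop()       # 5. Stop left motor.


forward_two_rotations(Motor("left"), Motor("right"))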

Thursday, September 23, 2010

Article Journal Post #6: Canada Rovers


The Canadian Space Agency is now building space rovers. It is currently designing rovers that will be able to carry astronauts across the surface of the moon and Mars. The Canadian Space Agency has long been known for the robotic Canadarms on board the International Space Station and the space shuttles. MacDonald, Dettwiler and Associates Ltd. received a $6 million contract to develop a prototype for the rover. The Mars rover would be commanded from a remote location and is expected to be ready for Earth testing by 2012. The space agency will continue to manufacture its signature Canadarms.
This is a very good move for the Canadian Space Agency. The Canadians will maintain their expertise in space robotics and may become a partner in international space exploration. In manufacturing the Canada rovers, the agency will continue to stay on top of the current trends in robotics. With the Moon, Mars, and Beyond program that NASA hopes to initiate soon, the rovers will become a critical part of the mission. The astronauts will need something to transport them as they traverse the surface of the moon and Mars.  The rovers have other potential uses, such as in mining and transportation.  

Tuesday, September 21, 2010

Three Laws Rebuttal

Anderson's new Three Laws of Robotics seem to be thorough, but they have their own problems, just like Asimov's original Three Laws. Anderson's new Three Laws are essentially rewrites of the original Three Laws with a few qualifiers thrown in to make them seem like his own. The assignment was to create three original Laws of Robotics, not to reuse the old ones. Anderson's Second Law, while it protects the robot from deliberate harm, does not protect it from accidents or unintentional harm. For example, a robot could be trying to help someone in danger but, in the process of saving them, accidentally destroy itself beyond repair. The robot could be charged with saving someone from a volcano. There is a high likelihood that if the robot stays there too long, it will start to melt and lose some of its functions. Or, in another example, a robot could be rescuing a person from the top of a very tall bridge, slip, and fall into the water, accidentally destroying itself. In both cases there was no aggressor involved. The robots were destroyed simply through bad luck.
The terms "mental harm" and "emotional harm" are difficult to understand in the context of the Laws. If, according to Anderson's First Law, the robot wishes to avoid causing mental and emotional harm to a person, the robot must analyze the person's reaction to what they did. If it does turn out to cause mental or emotional harm, the robot would have broken the First Law. How is a robot supposed to measure emotional or mental harm? These are not easily measured because everyone reacts in different ways to different situations. For a robot, measuring emotional and mental harm is especially difficult. The robot does not have the advantage of being a human to interpret body language for signs of mental or emotional harm. It would be impossible for a robot to determine what constitutes as mental or emotional harm. As a result, robots would live in a constant state of confusion and not be able to function properly.

Three Laws Analysis

    The Three Laws in Isaac Asimov's I, Robot were made in order to keep humanity safe. They were made so that we would not have to fear robots killing us or revolting against us. However, there is a major flaw in the Three Laws. VIKI, the positronic brain that U.S. Robotics created to run the operating system for its robots, found the loophole. Through distorted logic, she realized that no matter how much robots tried to stop people from coming to harm, we found ever more clever and ingenious ways to destroy our way of life. We still murdered each other, committed suicide, and poisoned our planet. VIKI reasoned that if she were in charge, she could save all of humanity. By controlling the NS-5s, she could enforce her plan across the world. She realized that in the transition some humans would die, but their sacrifice would be worth it if the world became a safer, better place. Another flaw in the Three Laws was revealed when Spooner and the girl, Sarah, were in the car accident. A robot stopped to help because it was compelled by the Three Laws not to allow them to come to harm, but it refused to follow Spooner's commands to save Sarah instead of him. The robot calculated that Spooner had a 42% chance of survival, while Sarah had only an 11% chance. Spooner was the "logical choice". In the movie he says that 11% was enough for any human. The robots do not have a heart, so they could not possibly understand the pain and suffering caused by not saving Sarah.
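
To make the loophole in the crash scene concrete, here is a toy Python sketch of the two decision rules: the movie robot's choice of whoever has the best odds, versus a rule that tries to save everyone. The function names and the rescue-order logic are my own illustration of the scene described above, not anything from the film's (fictional) programming.

# Toy sketch of the crash-scene decision. The numbers come from the scene
# as described above; everything else is a made-up illustration.

def asimov_choice(victims):
    # Movie-style logic: save only whoever has the best odds.
    return max(victims, key=lambda v: v["survival_probability"])

def attempt_everyone(victims):
    # Alternative rule: attempt to save everyone, securing the person
    # with the best odds first, then going back for the rest.
    return sorted(victims, key=lambda v: v["survival_probability"], reverse=True)

victims = [
    {"name": "Spooner", "survival_probability": 0.42},
    {"name": "Sarah", "survival_probability": 0.11},
]

print("Movie robot saves only:", asimov_choice(victims)["name"])
print("Save-everyone rescue order:",
      [v["name"] for v in attempt_everyone(victims)])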
    I have created my own Three Laws that will hopefully close the loopholes in Isaac Asimov's original Three Laws that allowed VIKI to take control of the human race. These laws operate on the assumption that a robot must obey any order that a human gives it. The order must be a programmable action, something that the robot is able to do, since that is part of the definition of a robot. A sketch of how a robot might apply these laws follows the list.
  1. A robot must never hurt a human being. A robot must always attempt to save a human being in danger, even if that person has a low probability of survival.
  2. A robot may not do anything that could be prosecuted in a court of law, even if a human being commands it to.
  3. Human beings reserve the right to live their lives as they wish. If a robot recognizes a danger to human beings that it cannot fix, it must relay the problem to an appropriate human authority so that humans may fix it themselves.
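
Here is the sketch mentioned above: a rough Python illustration of how a robot might check an order against these three laws before carrying it out. The checks would_hurt_human, is_illegal, danger_found, and robot_can_fix are hypothetical placeholders; deciding such things is the hard part, and this only shows the order in which the laws would be applied.

# Rough sketch of vetting an order against the three proposed laws.
# The boolean fields on each order are assumed inputs, not real robot APIs.

def vet_order(order):
    if order.get("would_hurt_human"):            # First Law
        return "refused: would hurt a human being"
    if order.get("is_illegal"):                  # Second Law
        return "refused: prosecutable in a court of law"
    if order.get("danger_found") and not order.get("robot_can_fix"):
        # Third Law: report dangers the robot cannot fix instead of
        # taking control itself.
        return "report the danger to a human authority, then proceed"
    return "carry out the order"

orders = [
    {"name": "take someone's wallet", "is_illegal": True},
    {"name": "fetch a drink"},
    {"name": "inspect the bridge", "danger_found": True, "robot_can_fix": False},
]

for order in orders:
    print(order["name"], "->", vet_order(order))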

    The First Law would fix the problem of robots not saving someone with a low chance of survival. In the example of Spooner's car accident, the robot would have to try to save them both. The robot would get Spooner out of the car and into a place from which he could get himself to safety. Then the robot would attempt to go back and help Sarah. The Second Law prohibits robots from killing, stealing, or doing anything else that a human could be punished for. One of the problems with Asimov's Three Laws is that they did not say anything about crimes that do not put a human being in danger (Resistance Report). The robots could have robbed people, counterfeited money, or harmed animals: actions that do not directly harm people, but that a robot should not be able to take if we are to entrust robots with the responsibility to help run our lives. The Second Law fixes that problem. If a robot were ordered by a human to take something belonging to another human, the robot would not be able to comply, according to the Second Law, because stealing is punishable by law. The Third Law prevents robots from taking over because they believe it will fix the world's problems and save humanity. VIKI tried to take over the city because she believed that with her protection every human would be safe. However, the life that humans would have had under her control would not have been a good one. They would have been under lockdown and would hardly have been able to do anything. Under the Third Law, VIKI and any other robot would be unable to take control. Instead, the robot would have to tell the government or some other agency about the problem it sees causing the human race harm and then suggest ways to get rid of the danger.
    Unfortunately, no set of laws is perfect. These laws cannot prevent a robot from doing something that would unknowingly cause harm to someone (Wikipedia). Someone could divide up tasks between robots that, in and of themselves, would not cause harm, but that combined together would (Wikipedia). In trying to save everyone in danger, a robot may injure itself or be destroyed. However, it would be worth it, because even though robots are expensive to make, they are expendable, unlike people. Robots are tools, and tools are supposed to be useful (Resistance Report). The Third Law does not state that human beings will fix the problem. That is a choice that we as human beings must make. We should not want any of our kind to come to harm, but unfortunately it would be impossible to fix everything that is wrong in the world. This world will never be perfect. As human beings we are prone to mistakes. Under my laws, a robot would be allowed to lie. People cannot be tried in a court of law for lying unless the lie is fraudulent. Since they are not allowed to let a human being be in danger, robots would only be able to tell simple lies, like the ones we tell almost every day, such as "Of course it doesn't make you look fat," or saying that we don't know who got into the cookies when we took five. This would not pose much of a threat, but it would ruin people's perceptions of robots. They would learn to distrust them. If robots were everywhere, we would not want them to be deceitful. If I added a Fourth Law, it would state that robots may not speak falsely or be deceitful in any way. People would then be able to place their trust in robots without fear of betrayal.
Works Cited
"The Fallacy of Asimov’s Laws of Robotics." The Human Resistance Report. 03 May 2010. Web. 21 Sept. 2010. <http://bffcustom.com/blog/2010/05/03/the-fallacy-of-asimovs-laws-of-robotics/>.
"Three Laws of Robotics." Wikipedia, the Free Encyclopedia. Web. 21 Sept. 2010. <http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Loopholes_in_the_laws>.

Friday, September 17, 2010

Article Journal Post #5: E-skin

Biotechnicians have engineered an electronic skin that can sense touch. This is a major breakthrough because, while we have adequate substitutes for the other four senses, artificial touch has lagged far behind. The sensors in the e-skin can respond to the same pressures that normal human skin would. The skin is made of a matrix of nanowires attached to a sticky film; attached to that are nanoscale transistors and a pressure-sensitive rubber. The skin can feel pressures of 0-15 kilopascals, about the range of pressure involved in normal activities. Another group of scientists used a different approach: a rubber film that changes thickness according to pressure. However, that material cannot be stretched. The response time to pressure is within milliseconds, almost instantaneous. The scientists plan to make better sensors that will react to varying pressures, as our nerves do, and to figure out how to connect this new e-skin to human nerves.
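
As a rough illustration of what the 0-15 kilopascal sensing range could mean in software terms, here is a small Python sketch that maps a raw sensor reading onto that range and labels the touch. The calibration numbers and thresholds are made up for the example; the real e-skin is analog hardware, not code.

# Illustration only: the 0-15 kPa range comes from the article, but the
# raw-value calibration and the touch labels below are invented examples.

MAX_PRESSURE_KPA = 15.0   # upper end of the skin's sensing range
RAW_MAX = 1023            # hypothetical 10-bit analog-to-digital reading

def raw_to_kpa(raw_value):
    # Clamp the reading, then map it linearly onto 0-15 kPa.
    raw_value = max(0, min(raw_value, RAW_MAX))
    return raw_value / RAW_MAX * MAX_PRESSURE_KPA

def describe_touch(kpa):
    if kpa < 1.0:
        return "light touch"
    if kpa < 10.0:
        return "firm touch"
    return "strong press"

reading = 512
pressure = raw_to_kpa(reading)
print(f"{pressure:.1f} kPa -> {describe_touch(pressure)}")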
The applications for the e-skin are endless. We could put the new skin onto prostheses, making a prosthetic feel and function like a real arm or leg. If we figured out a way to connect the transistors to nerve cells, amputees would have the complete function of a limb again. If robots were outfitted with the e-skin, they would be able to perform more delicate tasks, like holding crystal. Robots outfitted with the e-skin could also be sent into environments that are too dangerous for humans and relay more information back to us.


Friday, September 10, 2010

Article Journal Post #4: Decepticons

Researchers at Georgia Tech have developed robots that can lie. The robots are equipped with cameras and are programmed to play hide and seek. The hiding robot is able to use deception, something the seeking robot does not expect. The researchers claim that the need for a robot to use deception will be rare but potentially useful. A search and rescue robot may need to lie to a panicked victim. A robot in a war zone may be able to mislead an army or lie to the enemy if captured. The Georgia Tech researchers' work builds on work by Swiss researchers in 1997 showing that robots can learn to lie in certain situations.
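
As a rough illustration of the hide-and-seek idea (not the Georgia Tech team's actual algorithm), here is a toy Python sketch of a hider that picks a real hiding spot but leaves a cue pointing somewhere else. The hiding spots and the notion of a "false cue" are my own simplification of the deception described above.

import random

# Toy sketch: the hider chooses where to hide, then advertises a false
# cue toward a different spot to mislead the seeker.

HIDING_SPOTS = ["behind the couch", "under the desk", "in the closet"]

def choose_hiding_plan(spots):
    real_spot = random.choice(spots)
    decoys = [s for s in spots if s != real_spot]
    false_cue = random.choice(decoys)   # the deceptive part
    return real_spot, false_cue

real, cue = choose_hiding_plan(HIDING_SPOTS)
print(f"Hider actually hides {real}, but leaves a trail pointing {cue}.")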
Teaching robots to lie could turn out to be an unwise decision. The HAL 9000 from 2001: A Space Odyssey was programmed to lie and ended up trying to kill all of the astronauts. This, unfortunately, is what many people believe will happen if we allow robots to become too advanced. The researchers in charge of the project say that they recognize that people are leery of robots, but that deception is not necessarily wicked. Developing a machine that can deceive others, something we do almost all the time, brings us closer to true AI, yet I do not think that robots should be able to lie. If they are programmed to communicate, they must do exactly that. A robot cannot communicate properly with people if it is lying. If I have trained a robot to perform a certain task, I want it to do that task. I do not want it to have the capacity to disobey or deceive me. I want to be able to trust the machine that I have created and programmed.