Indecisi0n
Well-Known Member
This is just one of the huge hurdles automation is going to face.
Roboethics: The Human Ethics Applied to Robots
Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?
In the 1940s, American writer Isaac Asimov developed the Three Laws of Robotics, arguing that intelligent robots should be programmed so that, when facing conflict, they yield and obey the following three laws (a sketch of how this strict ordering might be encoded in software follows the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
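The Laws form a strict priority ordering: the First Law always outranks the Second, which outranks the Third. As a minimal sketch of what that ordering means computationally (my illustration, not anything from Asimov or the article; all names here are hypothetical), one could rank a robot's candidate actions lexicographically:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    """A possible action, pre-labeled with how it fares under each Law."""
    name: str
    harms_human: bool      # First Law: injures a human, or lets harm occur
    obeys_order: bool      # Second Law: complies with a human's order
    preserves_self: bool   # Third Law: avoids damage to the robot

def asimov_key(c: Candidate) -> tuple[bool, bool, bool]:
    # Lexicographic priority: the First Law dominates the Second,
    # which dominates the Third. False sorts before True, so the
    # "good" properties are negated.
    return (c.harms_human, not c.obeys_order, not c.preserves_self)

def choose(candidates: list[Candidate]) -> Candidate:
    # min() over lexicographic keys satisfies the highest-priority
    # law first; lower-priority laws only break ties.
    return min(candidates, key=asimov_key)

if __name__ == "__main__":
    options = [
        Candidate("obey order, endanger bystander", True, True, True),
        Candidate("refuse order, protect bystander", False, False, True),
        Candidate("refuse order, sacrifice self", False, False, False),
    ]
    print(choose(options).name)  # -> "refuse order, protect bystander"
```

The hard part, of course, is not the ordering but the labels: deciding whether a real-world action "harms a human" is precisely the open problem that makes the Laws unworkable in practice, which is the conflict described next.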
Fast-forward almost 80 years to the present: today, Asimov's Three Laws of Robotics pose more problems and conflicts for roboticists than they solve.
Roboticists, philosophers, and engineers are engaged in an ongoing debate about machine ethics. Machine ethics, or roboethics, is a practical proposal for how to simultaneously engineer robots and provide ethical sanctions for them.
Currently, researchers are following a trend that aims to promote the design and implementation of artificial systems with embedded, morally acceptable behavior.
On ethics and roboethics
Ethics is the branch of philosophy that studies human conduct, moral assessments, the concepts of good and evil, right and wrong, and justice and injustice. Roboethics is a fundamental ethical reflection on the particular issues and moral dilemmas generated by the development of robotic applications.
Roboethics, also called machine ethics, deals with the code of conduct that robot design engineers must implement in a robot's Artificial Intelligence. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems will exhibit ethically acceptable behavior in situations where robots, or other autonomous systems such as autonomous vehicles, interact with humans.
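To make the "code of conduct" idea concrete, here is a minimal sketch (my illustration under assumed names, not a real robotics API; EthicsFilter and the example rules are hypothetical) of the common pattern in which an ethics layer sits between a planner and the actuators and can veto a proposed action:

```python
from typing import Callable, Optional

# A rule maps a proposed action (here just a dict of features) to a
# human-readable objection, or None if it has no objection.
Rule = Callable[[dict], Optional[str]]

class EthicsFilter:
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def review(self, action: dict) -> list[str]:
        """Collect every objection the rules raise against an action."""
        return [msg for rule in self.rules
                if (msg := rule(action)) is not None]

    def approve(self, action: dict) -> bool:
        # An action is executed only if no rule objects to it.
        return not self.review(action)

# Example rules a designer might encode for a delivery robot.
def no_collision(action: dict) -> Optional[str]:
    if action.get("min_pedestrian_distance_m", float("inf")) < 0.5:
        return "action brings robot within 0.5 m of a pedestrian"
    return None

def speed_limit(action: dict) -> Optional[str]:
    if action.get("speed_mps", 0.0) > 1.5:
        return "action exceeds 1.5 m/s near humans"
    return None

governor = EthicsFilter([no_collision, speed_limit])
proposed = {"speed_mps": 2.0, "min_pedestrian_distance_m": 0.3}
print(governor.review(proposed))  # both rules object; action is rejected
```

Real systems would embed such checks at several levels of the control stack, but the sketch shows where "ethically acceptable behavior" would actually live in the software.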
Ethical issues will continue to multiply as more advanced robots come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published in IEEE Robotics and Automation Magazine, the researchers list the ethical issues emerging in two sets of robotic applications: service robots and lethal robots.
Service robots are created to live and interact with humans peacefully, whereas lethal robots are created to fight on the battlefield as military robots.
According to The Ethical Landscape of Robotics, Noel Sharkey argues that "the cognitive capabilities of robots do not match those of humans, and thus, lethal robots are unethical, as they may make mistakes more easily than humans," while Ronald Arkin believes that "although an unmanned system will not be able to behave perfectly on the battlefield, it can perform more ethically than human beings."
In part, the question of the morality of using robots on the battlefield comes down to the capabilities and the type of Artificial Intelligence in question.
Robots in the military: Designed to kill and moral accountability
Military robots are not just a thing of the present; they date back to World War II and the Cold War, with the German Goliath tracked mines and the Soviet teletanks among the earliest examples. Military robots can be used to fire a gun, disarm bombs, carry wounded soldiers, detect mines, fire missiles, fly, and so on.
Today, many other uses for military robots are being developed by applying other technologies to robotics. The U.S. military expects a fifth of its combat units to be fully automated by 2020.
What kind of roboethics will be embedded in military robots, and who will decide on it? Asimov's laws cannot be applied to robots that are designed to kill humans.
Also in 2020, the U.S. Army is going to live-test armored robotic vehicles; a demonstration was held in May in Texas.
Roboethics will become increasingly important as we enter an era in which more advanced and sophisticated robots, as well as Artificial General Intelligence (AGI), become an integral part of our daily lives.
The debate on the ethical and social issues of advanced robotics will therefore only grow in importance. The current growth of robotics and the rapid development of Artificial Intelligence require roboticists, and humans in general, to be prepared sooner rather than later.
As the discussion of roboethics advances, some argue that robots will contribute to building a better world. Others argue that robots are incapable of being moral agents and should not be designed with embedded moral decision-making capabilities.
Finally, robots are not yet moral agents to whom moral responsibility can be attributed, though they may become so in the future. Until then, the engineers and designers of robots must assume responsibility for the ethical consequences of their creations. In other words, they must be morally accountable for what they design and bring into the world.