Roboethics: The Human Ethics Applied to Robots

Indecisi0n

Well-Known Member
This is just one of the huge hurdles automation is going to face.

Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?

In the 1940s, American writer Isaac Asimov developed the Three Laws of Robotics, arguing that intelligent robots should be programmed so that, when facing conflict, they submit to and obey the following three laws:

  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm

  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
Isaac Asimov's Laws of Robotics were first introduced in the short science fiction story "Runaround" (PDF), published in the March 1942 issue of Astounding Science Fiction.
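Read as an engineering spec, the three laws above amount to a strict priority ordering: a lower law only gets a say once every higher law is satisfied. As a rough illustration, here is a minimal Python sketch of that priority scheme; the Action type and its hand-set outcome flags are hypothetical stand-ins for the real (and very hard) problem of predicting an action's consequences:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical candidate action, with predicted outcomes set by hand."""
    name: str
    harms_human: bool     # would injure a human, or allow harm through inaction
    obeys_order: bool     # consistent with the orders given by humans
    preserves_self: bool  # avoids damage to the robot itself

def choose(candidates: list[Action]) -> Action | None:
    """Apply the Three Laws as a lexicographic priority: Law 1 > Law 2 > Law 3."""
    # The First Law is an absolute filter: any action that harms a human is out.
    lawful = [a for a in candidates if not a.harms_human]
    if not lawful:
        return None  # no lawful option remains: do nothing and defer to humans
    # Among lawful options, prefer obedience (Law 2), then self-preservation (Law 3).
    return max(lawful, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("push the bystander aside", harms_human=True,  obeys_order=True,  preserves_self=True),
    Action("stop and wait",            harms_human=False, obeys_order=False, preserves_self=True),
    Action("take the long detour",     harms_human=False, obeys_order=True,  preserves_self=False),
]
print(choose(options).name)  # "take the long detour": harmless and obedient, despite the self-risk
```

Even this toy version shows where the trouble starts: everything hinges on the robot filling in harms_human correctly, and predicting harm is exactly the part no one knows how to guarantee.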

Fast-forward almost 80 years to the present: today, Asimov's Three Laws of Robotics pose more problems for roboticists than they solve.

Roboticists, philosophers, and engineers are engaged in an ongoing debate about machine ethics. Machine ethics, or roboethics, is a practical proposal for how to simultaneously engineer robots and provide ethical sanctions for them.

Currently, researchers are pursuing a line of work that aims to promote the design and implementation of artificial systems with embedded, morally acceptable behavior.

On ethics and roboethics
Ethics is the branch of philosophy that studies human conduct, moral assessments, the concepts of good and evil, right and wrong, and justice and injustice. The concept of roboethics brings up a fundamental ethical reflection on the particular issues and moral dilemmas generated by the development of robotic applications.

Roboethics, also called machine ethics, deals with the code of conduct that robot design engineers must implement in a robot's artificial intelligence. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems will exhibit ethically acceptable behavior in situations where robots, or other autonomous systems such as autonomous vehicles, interact with humans.

Ethical issues will continue to mount as more advanced robots come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published in IEEE Robotics and Automation Magazine, the researchers list various ethical issues emerging in two sets of robotic applications: service robots and lethal robots.

Service robots are created to peacefully live and interact with humans, whereas lethal robots are created to be sent into battle as military robots.

According to The Ethical Landscape of Robotics, Noel Sharkey argues that "the cognitive capabilities of robots do not match that of humans, and thus, lethal robots are unethical as they may make mistakes more easily than humans." Ronald Arkin, meanwhile, believes that "although an unmanned system will not be able to perfectly behave in the battlefield, it can perform more ethically than human beings."

In part, the question about the morality of using robots on the battlefield turns on assumptions about the capability and type of Artificial Intelligence in question.

Robots in the military: Designed to kill and moral accountability
Military robots are certainly not just a thing of the present: they date back to World War II and the Cold War, in the form of the German Goliath tracked mines and the Soviet teletanks. Military robots can be used to fire guns, disarm bombs, carry wounded soldiers, detect mines, fire missiles, fly, and so on.

Today, many other uses for military robots are being developed by applying other technologies to robotics. The U.S. military expects to have a fifth of its combat units fully automated by 2020.

What kind of roboethics will be embedded in military robots, and who will decide on them? Asimov's laws cannot be applied to robots that are designed to kill humans.

Also in 2020, the U.S. Army plans to live-test armored robotic vehicles; a demonstration was held in May in Texas.

Roboethics will become increasingly important as we enter an era in which more advanced and sophisticated robots, as well as Artificial General Intelligence (AGI), become an integral part of our daily lives.

The debate on ethical and social issues in advanced robotics must therefore take on increasing importance. The current growth of robotics and the rapid development of Artificial Intelligence require roboticists, and humans in general, to be prepared sooner rather than later.

As the discussion of roboethics advances, some argue that robots will contribute to building a better world. Others argue that robots are incapable of being moral agents and should not be designed with embedded moral decision-making capabilities.

Finally, robots may one day become moral agents with attributed moral responsibility. Until then, the engineers and designers of robots must assume responsibility for the ethical consequences of their creations.

In other words, engineers and designers of robots must be morally accountable for what they design and bring out into the world.

Indecisi0n

Well-Known Member
Even though there are many hurdles before we see true, full-scale AI, the progress is nothing short of astounding.
When I see videos like this I can't help but think we are all doomed! lol

 

BadIdeaGuy

Moderator
Staff member
It absolutely is amazing progress.

But I'm not worried.
The thing you've got to keep in mind is that AI =/= AGI (Artificial General Intelligence).
We can program a robot to do somersaults... That's easy.
We can program them to learn our speech patterns.
Or do a million other things.

But all of that depends on a computer programmer spending the time to code the rules and goals of each individual task.

But researchers have so far completely failed to create a machine capable of learning on its own how to accomplish new tasks.

This means that the computer is only capable of responding appropriately to situations where the rules have already been set forth by a human.
And there are millions of different variables that prevent anything resembling current AI from being any sort of a threat.

Imagine a robotic package car driver for last mile deliveries.
The programmer will have to implement specific routines for:
What happens if the robot is overturned (self-righting maneuvers).
What happens if the robot runs low on power.
What happens if a package breaks open.
What happens if a dog attacks it.
What happens if the delivery is to a gated community.
What happens if a servo stops responding.
What happens if the address turns out to be nonexistent.

And so on, and so forth.

An AGI system would be trainable, similar to a new hire. The supervisors would teach the robot how to handle such things, and it would be able to generalize that knowledge and use reason and planning to execute its orders.

We don't have any such system, or anything close to it. Those cognitive abilities have not been built into any AI system I've ever seen, whether built on neural nets or otherwise.

So we would have to write specific routines for the issues I outlined above, and a million more that I can't even think of, to handle each one individually.
And as soon as that robot was given a situation it was not instructed how to handle, it would completely fail.
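
To put that in concrete terms, the hand-coded approach behind the list above boils down to a dispatch table: one routine per anticipated situation, and nothing sensible to do when a new one shows up. A minimal Python sketch, with event names and handlers made up for illustration rather than taken from any real delivery system:

```python
def self_right() -> None:
    print("running self-righting maneuver")

def return_to_charger() -> None:
    print("routing to the nearest charging point")

def flag_damaged_package() -> None:
    print("tagging package for manual handling")

def evade_dog() -> None:
    print("retreating and notifying a human operator")

# Every situation the robot can cope with must be enumerated in advance.
HANDLERS = {
    "overturned":     self_right,
    "low_battery":    return_to_charger,
    "package_broken": flag_damaged_package,
    "dog_attack":     evade_dog,
    # ...plus "gated_community", "dead_servo", "bad_address", and the million
    # other cases somebody has to think of ahead of time.
}

def respond(event: str) -> None:
    handler = HANDLERS.get(event)
    if handler is None:
        # No rule was written for this situation, and the robot cannot invent
        # one on its own, so the best it can do is stop and call home.
        raise RuntimeError(f"unhandled situation: {event!r}")
    handler()

respond("low_battery")       # handled: a programmer anticipated it
respond("sinkhole_in_road")  # raises RuntimeError: nobody wrote this rule
```

That last line is the whole point: the table only contains what a programmer thought to put in it, and nothing in the system can improvise an entry for anything else.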

Sorry for the essay. I love most anything to do with AI, but I think a lot of people mistake it for a threat when it really isn't anywhere close.

Would love to get my hands on one of those Atlas models. :)
 

Indecisi0n

Well-Known Member
I understand this as well, which is why I am not worried about it. It won't happen in my lifetime, or at least not during my working career. I have to say it's very entertaining watching all these drivers freaking out about it. There was pandemonium when they introduced drones, which to me was just comical.
 

Indecisi0n

Well-Known Member
No need to apologize for the essay. It's comforting knowing that others out there get it as well. I have always loved tech and followed it pretty closely. I just saw this today for the first time and was pretty amazed. It also works with a smartphone's caller ID and shows on the watch in braille who is calling, which is pretty damn cool.
 

Brownslave688

You want a toe? I can get you a toe.
Once quantum computing hits, we could see these changes happen very fast.
 

Indecisi0n

Well-Known Member
Assuming a robot's gender? I thought you were more woke than that, bruh
I didn't see these

(attached image: ALLOY20-alloy28-Alloy-59-screw-nut-washer.jpg)
 

trickpony1

Well-Known Member
Will the robots be programmed to respond to pressure from their ORS, center manager, the Board of Directors and the man behind the curtain in the Ivory Towers of Atlanta?
 