Home
Forums
Brown Cafe UPS Forum
Life After Brown
Roboethics: The Human Ethics Applied to Robots
<blockquote data-quote="BadIdeaGuy" data-source="post: 4214764" data-attributes="member: 73381"><p>It's absolutely amazing progress.</p><p></p><p>But I'm not worried.</p><p>The thing to keep in mind is that AI ≠ AGI (Artificial General Intelligence).</p><p>We can program a robot to do somersaults. That's easy.</p><p>We can program one to learn our speech patterns, or to do a million other things.</p><p></p><p>But all of that depends on a programmer spending the time to code the rules and goals of each individual task.</p><p></p><p>So far, nobody has built a machine capable of learning how to accomplish genuinely new tasks on its own.</p><p></p><p>That means the computer can only respond appropriately to situations whose rules a human has already set forth, and there are millions of variables that keep anything resembling current AI from being any sort of threat.</p><p></p><p>Imagine a robotic package car driver for last-mile deliveries. The programmer would have to implement specific routines for:</p><p>What happens if the robot is overturned (self-righting maneuvers).</p><p>What happens if the robot runs low on power.</p><p>What happens if a package breaks open.</p><p>What happens if a dog attacks it.</p><p>What happens if the delivery is to a gated community.</p><p>What happens if a servo stops responding.</p><p>What happens if the address turns out to be nonexistent.</p><p></p><p>And so on, and so forth.</p><p></p><p>An AGI system would be trainable, like a new hire: supervisors would teach the robot how to handle such things, and it would generalize that knowledge, using reason and planning to carry out its orders.</p><p></p><p>We don't have any such system, or anything close to one. Those cognitive abilities haven't been built into any AI system I've ever seen, whether based on neural nets or otherwise.</p><p></p><p>So we'd have to write specific routines for the issues I outlined above, plus a million more I can't even think of, and handle each one individually. And the moment that robot met a situation it was never instructed how to handle, it would completely fail.</p><p></p><p>Sorry for the essay. I love most anything to do with AI, but I think a lot of people mistake it for a threat when it really isn't anywhere close.</p><p></p><p>Would love to get my hands on one of those Atlas models. <img src="/community/styles/default/xenforo/smilies/smile.png" class="smilie" loading="lazy" alt=":)" title="Smile :)" data-shortname=":)" /></p></blockquote><p></p>
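The brittleness the post describes, where every contingency must be enumerated by a human and anything unanticipated causes outright failure, can be sketched as a simple dispatch table. This is a hypothetical illustration (the handler names and situations are made up for the example), not any real robotics system:

```python
# Sketch of hard-coded contingency handling: a human must write a rule
# for every situation in advance; unlisted situations simply fail.

class UnhandledSituationError(Exception):
    """Raised when the robot meets a situation no programmer anticipated."""

# Hypothetical handlers for the contingencies listed in the post.
def self_right() -> str:
    return "performing self-righting maneuver"

def return_to_charger() -> str:
    return "returning to charging station"

def flag_damaged_package() -> str:
    return "flagging damaged package for pickup"

# The robot's entire "understanding" is this human-authored table.
HANDLERS = {
    "overturned": self_right,
    "low_power": return_to_charger,
    "package_broken": flag_damaged_package,
}

def respond(situation: str) -> str:
    handler = HANDLERS.get(situation)
    if handler is None:
        # No rule was written for this case, so the robot fails completely.
        raise UnhandledSituationError(situation)
    return handler()

print(respond("low_power"))  # a pre-programmed case works fine
# respond("dog_attack") raises UnhandledSituationError: nobody wrote that rule
```

An AGI, by contrast, would not need the table extended by hand; it would generalize from training, the way a new hire does.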