Page 2 of 3 FirstFirst 123 LastLast
Results 11 to 20 of 29

Thread: Ex-Google engineer starts religion that worships artificial intelligence

  1. #11
    Spock of Vulcan Ian Jeffrey's Avatar
    Joined
    Mar 2013
    Posts
    51,257
    Thanks
    24421

    From
    Vulcan
    Quote Originally Posted by Spookycolt View Post
    What if the robots considered humanity a threat to itself?
    See the Third Law.

  2. #12
    Established Member NeoVsMatrix's Avatar
    Joined
    Nov 2013
    Posts
    7,082
    Thanks
    5969

    From
    NY
    Quote Originally Posted by Ian Jeffrey View Post
    See the Third Law.
    The Second Law would already allow a robot to self-destruct if so commanded by a human, and the Third Law would not supersede that order.

    That's the issue: humans are superior to the robots, and the laws given by Asimov would allow a human to destroy a robot out of sheer pleasure.

  3. #13
    Spock of Vulcan Ian Jeffrey's Avatar
    Joined
    Mar 2013
    Posts
    51,257
    Thanks
    24421

    From
    Vulcan
    Quote Originally Posted by NeoVsMatrix View Post
    The Second Law would already allow a robot to self-destruct if so commanded by a human, and the Third Law would not supersede that order.
    Correct. But directly on point, by definition (under the Three Laws) a threat to a robot by a human is a non-issue.

    Quote Originally Posted by NeoVsMatrix View Post
    That's the issue: humans are superior to the robots, and the laws given by Asimov would allow a human to destroy a robot out of sheer pleasure.
    Yes, it would. There would, of course, be legal issues if you are damaging someone else's property, but that is a problem in human society.

    The irony would be in the origin of the word "robot," as used by Karel Čapek in his play R.U.R. ("Rossum's Universal Robots") (though apparently Karel's brother Josef actually coined the word). Interestingly enough, I believe Asimov was the first one to use the term "robotics," or so he wrote somewhere once.

  4. #14
    Galactic Ruler Spookycolt's Avatar
    Joined
    May 2012
    Posts
    60,087
    Thanks
    10982

    From
    By the wall
    Quote Originally Posted by Ian Jeffrey View Post
    See the Third Law.
    I'm not talking about their own existence.

    This is a flaw with the three rules.

    If a robot sees a human harming another human, what does it do?

    It can't allow humans to purposefully kill one another, because it is commanded to protect them.

    Yet it can't intervene and possibly hurt a human.

  5. #15
    SWED Missle Command Champion johnflesh's Avatar
    Joined
    Feb 2007
    Posts
    19,040
    Thanks
    9214

    From
    Colorado
    Quote Originally Posted by Spookycolt View Post
    I'm not talking about their own existence.

    This is a flaw with the three rules.

    If a robot sees a human harming another human, what does it do?

    It can't allow humans to purposefully kill one another, because it is commanded to protect them.

    Yet it can't intervene and possibly hurt a human.
    In that case, the robot doing nothing is it doing what it is supposed to do, under the 3 laws.

  6. #16
    Galactic Ruler Spookycolt's Avatar
    Joined
    May 2012
    Posts
    60,087
    Thanks
    10982

    From
    By the wall
    Quote Originally Posted by johnflesh View Post
    In that case, the robot doing nothing is it doing what it is supposed to do, under the 3 laws.
    No, because it's allowing a human to be harmed if it does nothing. Yet if saving that human means hurting another human, it is also breaking one of the laws.

    There is no way out for it.

    They may decide the only way to prevent humans from hurting humans, under the laws, is to eliminate all of them so that it cannot happen again.

  7. #17
    SWED Missle Command Champion johnflesh's Avatar
    Joined
    Feb 2007
    Posts
    19,040
    Thanks
    9214

    From
    Colorado
    Quote Originally Posted by Spookycolt View Post
    No, because it's allowing a human to be harmed if it does nothing. Yet if saving that human means hurting another human, it is also breaking one of the laws.

    There is no way out for it.

    They may decide the only way to prevent humans from hurting humans, under the laws, is to eliminate all of them so that it cannot happen again.
    Choosing between allowing a human to be harmed by another human and saving that human is not part of the 3 laws. It's black and white.

    This produces a null. Null is nothing, but a specific nothing.
    Last edited by johnflesh; 3rd October 2017 at 03:47 PM.

  8. #18
    Spock of Vulcan Ian Jeffrey's Avatar
    Joined
    Mar 2013
    Posts
    51,257
    Thanks
    24421

    From
    Vulcan
    Quote Originally Posted by Spookycolt View Post
    If a robot sees a human harming another human, what does it do?

    It can't allow humans to purposefully kill one another, because it is commanded to protect them.

    Yet it can't intervene and possibly hurt a human.
    The robot would be compelled to intervene (otherwise a human would come to harm through the robot's inaction) without harming either human, including disobeying orders and risking its own existence, if necessary. I do not believe this exact situation ever came up in the Asimov stories, but I vaguely recall it being discussed: if the robot did cause harm to a human, it would damage the positronic brain to varying degrees, depending on exactly what happened.
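    The precedence the Laws impose can be sketched as a toy priority model. To be clear, every action name, flag, and score below is my own invention for illustration; Asimov never specified any mechanism like this:

    ```python
    # Toy model of Three Laws precedence (purely illustrative).
    LAWS = ("no_harm_to_human", "obey_orders", "self_preservation")

    def violations(action):
        """Return the set of laws a candidate action violates.

        `action` is a dict of boolean flags; missing flags default to False.
        """
        v = set()
        # The First Law has two clauses: direct harm, and harm by inaction.
        if action.get("harms_human") or action.get("allows_harm_by_inaction"):
            v.add("no_harm_to_human")
        if action.get("disobeys_order"):
            v.add("obey_orders")
        if action.get("endangers_self"):
            v.add("self_preservation")
        return v

    def cost(action):
        """Weight violations so a higher law always outranks all lower ones."""
        return sum(2 ** (len(LAWS) - i) for i, law in enumerate(LAWS)
                   if law in violations(action))

    def choose(options):
        """Pick the named action with the lowest total violation cost."""
        return min(options, key=lambda name: cost(options[name]))

    # The bystander dilemma from this thread: doing nothing and striking
    # the attacker both violate the First Law; restraining the attacker at
    # risk to the robot itself violates only the Third Law, so it wins.
    options = {
        "do_nothing": {"allows_harm_by_inaction": True},
        "restrain_attacker": {"endangers_self": True},
        "strike_attacker": {"harms_human": True},
    }
    ```

    On this scheme `choose(options)` returns `"restrain_attacker"`: the Third Law yields to the First, so the robot intervenes at its own risk rather than stand by or strike a human.
    
    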
    Thanks from NeoVsMatrix

  9. #19
    Spock of Vulcan Ian Jeffrey's Avatar
    Joined
    Mar 2013
    Posts
    51,257
    Thanks
    24421

    From
    Vulcan
    Quote Originally Posted by johnflesh View Post
    Choosing between allowing a human to be harmed by another human and saving that human is not part of the 3 laws. It's black and white.
    Actually, it is. The robot may not, through inaction, allow a human being to come to harm.

  10. #20
    The Un-Holy One The Man's Avatar
    Joined
    Jul 2011
    Posts
    34,219
    Thanks
    19648

    From
    Toronto
    Sarah Connor warned us against this shit...
    Thanks from Ian Jeffrey and bajisima

