
If robots kill people, can they be brought to trial?

Discussion in 'Off Topic' started by PiratePast40, Jul 2, 2015.

  1. PiratePast40

    PiratePast40 Willamette Valley Well-Known Member

    Messages:
    2,096
    Likes Received:
    2,079
    According to the headline, the robot killed the man: http://abcnews.go.com/Technology/wireStory/robot-kills-man-volkswagen-plant-germany-32163980. So do we now indict cars, guns, and knives?

    To most of us, this is a case of either sloppy writing or a bit of tongue-in-cheek humor. Unfortunately, people like Floyd and Ginny really do feel this way about guns.

    So now - AND I'M JOKING WITH THIS - can we take up a collection to have one of these robots sent to our favorite people? :eek::eek::eek:
     
  2. The Heretic

    The Heretic Oregon Well-Known Member

    Messages:
    5,092
    Likes Received:
    6,864
    Robots are not sentient; they have less intelligence than most insects, and they have no rights or standing in the law.

    Despite what some "experts" in AI claim, AI is a long way from producing a computer that can think for itself. Even if we ever get to the point where computers do have real intelligence, there is no reason anyone would program an AI system with emotions that could motivate it to intentionally harm a human.
     
    rdt likes this.
  3. Chee-to

    Chee-to Oregon Well-Known Member

    Messages:
    2,288
    Likes Received:
    1,706
    ROBOT LIVES MATTER.............................:rolleyes:
     
  4. PiratePast40

    PiratePast40 Willamette Valley Well-Known Member

    Messages:
    2,096
    Likes Received:
    2,079
    Yeah, right :rolleyes:
     
    mjbskwim likes this.
  5. The Heretic

    The Heretic Oregon Well-Known Member

    Messages:
    5,092
    Likes Received:
    6,864
    If you want an AI to harm someone, you just tell (program) it to harm someone. Computers only know zeros and ones. If you tell a computer to do something, it will do it (if it has the capacity to do that thing). Why would you go to the trouble of giving an AI emotions (or the AI equivalent) just to get it to do something it would already do out of simple blind obedience?
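
    A toy sketch (purely hypothetical, all names made up) gets the point across - the program just walks whatever instruction list it is handed, with no motivation anywhere in sight:

        def run_robot(instructions):
            # The "robot" executes each step it is given, in order.
            # No emotions, no judgment, no intent - just blind
            # obedience to whatever the list happens to contain.
            for step in instructions:
                print(f"executing: {step}")

        run_robot(["move arm to position A", "close gripper", "move arm to position B"])

    Swap in any other step and it runs just the same. The program has no idea what the steps mean, which is the whole point.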