
March 15, 2008

Comments


Nato Welch

//"in some scenarios, weapon-wielding robots could someday act more humanely than humans."//

These are the short-lived scenarios in which the robots laid down their weapons, disabled them, and refused to fight.

Seriously, how does fighting with yet-more-gee-whiz technological proxies change the immorality of fighting wars in the first place?

It should be pointed out, if it hasn't been already, that there is an oft-overlooked difference between remotely-controlled robots and autonomous robots.

//If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?//

Why have so few people asked this about Blackwater, which fights with far better resources and far less accountability than our own armed forces?

The original purpose of corporations as "artificial persons" in the legal sense was to deflect liability from the human beings behind large enterprises. As we deploy autonomous fighters, what do you want to bet they will be used in the same way?

Svein Ove

"This work goes way beyond science-fiction author Isaac Asimov's Three Laws of Robotics, which supposedly ruled out scenarios where robots could harm humans. . . "

I wish people would stop propagating this misunderstanding. Asimov's books weren't at all about how the three laws would prevent problems; they were about how the laws were insufficient and dangerously simplistic, as anyone who actually /read/ them would know.

Even the robots that were supposedly heroes fell victim to this - in later books, it turns out that the ones who invented the "zeroth law" used it to rationalize permanently decreasing the IQ of all of humanity - definitely preventing geniuses - for our own good.

Mike Treder, CRN

You're right, of course, Svein, that Asimov never intended the Three Laws to be a serious proposal for handling robot intelligence. They were more like a thought problem, a formula to be played with in fiction. That point is actually made elsewhere in the article that I quoted. Thanks for bringing it up.

Tom Craver

Possibly, if one could make robots cheap enough and sturdy enough, they could use less-lethal force to capture and restrain enemy combatants.

Taken to an extreme, a robotic "warden" might be assigned to every human being, to watch them and prevent them from doing violence to others or themselves - as in Jack Williamson's "With Folded Hands" and "The Humanoids".

Is it more moral to allow men freedom they abuse (and then to kill them for doing wrong), or to deny them the freedom to choose to do evil? (Williamson copped out on that question, having some humans magically transcend to a new state of being and thereby escape the robots.)
