A new meme is quietly developing about the danger of 'killer robots'. So far, it's still below the radar for the general public. The average person knows about the Terminator movies and the like, but is not yet aware that autonomous military robots with the ability to use lethal force are being deployed as we speak.
For now, the news and the meme are both largely confined to special-interest blogs, like this one, this one, and our own.
MSNBC's Cosmic Log has also picked up the thread, writing recently about "KILLER ROBOTS ... FRIEND OR FOE?"
Thousands of robots are already on the battlefield in Iraq and Afghanistan, but what happens when you hand the robot a gun and turn it loose? Some researchers fear that giving military robots autonomy as well as ammo is the first step toward a "Terminator"-style nightmare, while others suggest that in some scenarios, weapon-wielding robots could someday act more humanely than humans.
Alan Boyle, the author of that blog, takes the question a step further by speculating about efforts toward the creation of "battlebots with a conscience."
This work goes way beyond science-fiction author Isaac Asimov's Three Laws of Robotics, which supposedly ruled out scenarios where robots could harm humans. ... Even without the Three Laws, there's plenty in today's debate over battlefield robotics to keep novelists and philosophers busy: Is it immoral to wage robotic war on humans? How many civilian casualties are acceptable when a robot is doing the fighting? If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?
With the news this week about the potential for building a brain-like molecular computer that could pack all the computing power of a human brain inside a two-inch sphere, the need to assess how we will deal with intelligent autonomous robots seems more acute than ever.
//"in some scenarios, weapon-wielding robots could someday act more humanely than humans."//
These are the short-lived scenarios in which the robots lay down their weapons, disable them, and refuse to fight.
Seriously, how does fighting with yet-more-gee-whiz technological proxies change the immorality of fighting wars in the first place?
It should be pointed out, if it hasn't been already, that there is an oft-overlooked difference between remotely controlled robots and autonomous robots.
//If a killer robot goes haywire, who (or what) goes before the war-crimes tribunal?//
Why have so few people been asking this about Blackwater, which fights with far better resources and far less accountability than our own armed forces?
The original purpose of corporations, as "artificial persons", in the legal sense, was to deflect liability from human beings in large enterprises. As we deploy autonomous fighters, what do you want to bet they will be utilized in that same way?
Posted by: Nato Welch | March 16, 2008 at 05:46 PM
"This work goes way beyond science-fiction author Isaac Asimov's Three Laws of Robotics, which supposedly ruled out scenarios where robots could harm humans. . . "
I wish people would stop propagating this misunderstanding. Asimov's books weren't at all about how the Three Laws would prevent problems; they were about how the laws were insufficient and dangerously simplistic, as anyone who actually /read/ them would know.
Even the robots that were supposedly heroes fell victim to this: in the later books, the ones who invented the "zeroth law" used it to rationalize permanently lowering the intelligence of all of humanity - ruling out any future geniuses - for our own good.
Posted by: Svein Ove | March 17, 2008 at 06:08 PM
You're right, of course, Svein, that Asimov never intended the Three Laws to be a serious proposal for handling robot intelligence. They were more like a thought experiment, a formula to be played with in fiction. That point is actually made elsewhere in the article that I quoted. Thanks for bringing it up.
Posted by: Mike Treder, CRN | March 17, 2008 at 08:07 PM
Possibly if one could make robots cheap enough and sturdy enough, they could use less lethal force to capture and restrain enemy combatants.
Taken to an extreme, a robotic "warden" might be assigned to every human being, to watch them and prevent them from doing violence to others or themselves - as in Jack Williamson's "With Folded Hands" and "The Humanoids".
Is it more moral to allow men freedom they abuse (and to then kill them for doing wrong), or to deny them the freedom to choose to do evil? (Williamson copped out on that question - having some humans magically transcend to a new state of being, and thereby escaping the robots.)
Posted by: Tom Craver | March 18, 2008 at 12:58 AM