That was a popular slogan for peace demonstrators of the Vietnam era (including me).
It might be repeated, with a slight revision, at some point during this century:
Military robots have already been deployed by the United States in the occupation of Iraq, and in growing numbers, as this recent article in The New Atlantis makes clear:
Not only are the quantities of robots increasing, but the varieties of their usage and capabilities are also expanding. Again, from The New Atlantis:
As one robotics executive put it at a demonstration of new military prototypes a couple of years ago, “The robots you are seeing here today I like to think of as the Model T. These are not what you are going to see when they are actually deployed in the field. We are seeing the very first stages of this technology.”
And just as the Model T exploded on the scene—selling only 239 cars in its first year and over one million a decade later—the demand for robotic warriors is growing very rapidly.
Most of the military robots currently in use are limited to surveillance purposes. A few are equipped for killing and have been used that way, but those are still in the minority.
This article (subscription required to read online), from "The Annals of Technology" in The New Yorker, describes rapid progress in the development of weaponized military robots.
The author observes demonstrations of robotic fighting machines on treads that can climb stairs, use on-board video to identify targets, and accurately fire five shotgun rounds per second with almost no recoil; similar robot warriors are mounted on small remote-control helicopters that can fly even in strong winds and strike targets with deadly accuracy. Jerry Baber, a private weapons designer profiled in the article, says he is also working on a ground robot that could fight its way into an enemy-held building and then deploy six smaller robots for individual combat operations.
So far, few of these advanced systems have been deployed, partly due to ethical questions, partly due to cost, but mostly, I suspect, because there are still fears to overcome about what happens if something goes badly wrong.
When asked about such worries, U.S. military spokespersons are quick to point to their policy of keeping a "man in the loop." In theory, a human decision is required before robot warriors take human lives. In practice, it may not always work that way -- and it's not hard to project a time when so many robots are in the field that the number and pace of decisions to be made are beyond human ability to keep up.
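To make that concern concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical assumption of mine, not a figure from any of the articles or the report discussed here; the only point is how quickly the arithmetic turns against meaningful human review.

    # Back-of-envelope sketch: how fast "man in the loop" strains at scale.
    # Every number below is an illustrative assumption, not a figure from
    # the articles or the report discussed in this post.

    robots_in_field = 10_000            # hypothetical number of deployed armed robots
    decisions_per_robot_per_hour = 60   # assume each robot escalates one fire/no-fire call per minute
    seconds_per_decision = 10           # assume an operator needs 10 seconds to review each call

    decisions_per_hour = robots_in_field * decisions_per_robot_per_hour
    operator_seconds = decisions_per_hour * seconds_per_decision

    # Operators required if each does nothing but review calls, nonstop
    operators_needed = operator_seconds / 3600
    print(f"{decisions_per_hour:,} decisions per hour -> roughly {operators_needed:,.0f} full-time reviewers")

Even with these fairly modest assumptions, genuine human review would demand well over a thousand operators doing nothing else, around the clock; at that point the "loop" risks becoming more slogan than safeguard.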
P. W. Singer, the author of Wired for War, says:
Meanwhile, according to the Times Online:
A rich variety of scenarios outlining the ethical, legal, social and political issues posed as robot technology improves are covered in the report. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer, or the U.S. President? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?
Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.
We're proud to note that the lead author of this report, prepared for the U.S. Office of Naval Research, is Dr. Patrick Lin, a member of CRN's Global Task Force on Implications and Policy.
Online reaction to Pat's important report, described by The Times as "the first serious work of its kind on military robot ethics," has been interesting to follow, especially as it takes thinkers beyond the usual questions and into deeper territory.
Nicholas Carr, author of The Big Switch: Rewiring the World, From Edison to Google, comments about the report on his blog:
"Robots," they write, "would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost."
Of course, this raises deeper issues, which the authors don't address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice - with all the messiness that goes with it?
Excellent points to consider. And taking matters even further, Paul Raven on the Futurismic blog says:
I’d go further still, and ask whether that capacity for emotion and moral action actually obviates the entire point of using robots to fight wars - in other words, if robots are supposed to take the positions of humans in situations we consider too dangerous to expend real people on, how close does a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?
These are hard questions, the kind many of us would prefer never to have to ask. But the time is near, if not now, when they will need to be answered. They become especially worrying when you consider the massive numbers and powerful destructive capabilities that molecular manufacturing could introduce.
In typical dystopian scenarios, perhaps most vividly presented by the Terminator movies, these smart killing machines have turned against their human makers in all-out war.
But what if, instead, the recursively improving computer brains of robot warriors allow them to become enlightened and to see the horror of warfare for what it is -- to recognize the ridiculousness of building more and better (and more costly) machines only to command them to destroy each other?
What if they gave a robot war and nobody came?
"These are hard questions"
No. They are not. Robots or AI or whatever are NOT flesh and blood. They will never FEEL anything, only at best give a weirdly spooky approximation of emotion through highly complex programming. Neither I nor anyone else should feel even the slightest twinge at the use or disposal of a TOASTER, though admittedly a well-programmed one.
Posted by: 1indio1 | March 07, 2009 at 09:31 PM
But that's what makes the question so interesting. If an advanced AI, embodied in a humanoid robot, has senses of sight, hearing, smell, taste, and touch that approach or even exceed human senses, those senses would in fact be exactly as real as ours.
You and I don't "see" the real world, any more than a video camera does. What happens is that we get light impressions on our retinas, triggering nerve impulses that pass to our brains, where a visual image is formed. We see an approximation of reality, no different from our manufactured friend.
So, if this new creation can have all of our senses, or more likely senses that greatly exceed ours, and has an open connection to the Internet, where it can access the whole world's store of digital knowledge, at speeds you and I couldn't comprehend, and if it claims to have emotions analogous to our own, then wouldn't you hesitate at least a moment before throwing it into the rubbish bin?
Posted by: Mike Treder | March 08, 2009 at 05:42 AM
Senses that greatly exceed ours? Unless what you are talking about is some sort of BIOLOGICAL entity, then NO, it is merely electronic feedback, and not EXPERIENCED at all. Merely data being interpreted, and appropriately acted upon by an intentless automaton by way of some pretty impressive programming. If you believe that to be anything else, you exhibit a remarkable ignorance of a subject you rant on about constantly on this blog. TOASTER, not life, sir, and nope, not one moment of hesitation.
Posted by: 1indio1 | March 08, 2009 at 10:39 PM
I have to agree with Indio more than Mike. Simply throwing parts in a box doesn't make a whole. Access to the Internet makes no difference. The fidelity of the senses makes almost no difference.
I don't see any particular reason why we'd want to give a military robot (or any other robot) the kind of emotional mechanisms that could experience pain in ways that raise real ethical issues.
In the end, I suspect that we will anthropomorphize our computer systems enough to give them rights long before any point where we "should" - just as we'd worry about a kid who tortured stuffed animals.
Chris
Posted by: Chris Phoenix | March 09, 2009 at 09:15 PM