Robots with feelings [will be available] in just 10 years, scientists predicted yesterday. They now claim it is essential to give robots their own emotions if they are to be capable of running independently and efficiently enough to take on a variety of domestic tasks.
More from a fascinating article in the Telegraph (UK):
At present, commercially available robots such as automatic vacuum cleaners are little more than drones capable of carrying out only one task. However, speaking at the American Association for the Advancement of Science in San Francisco yesterday, a panel of robotics experts said robots capable of multiple domestic tasks, which can also provide companionship for their owners, will be available within 10 years. And the scientists claim it is already possible to give robots such "feelings".

A number of groups around the world are now developing robots that have basic emotions in a bid to motivate the machines.
If a robot feels happy after it has cleaned a dirty carpet particularly well, then it will apparently seek out more dirt so it can repeat the achievement. Similarly, if the robot feels guilt or sadness at having failed at a task, it will try harder next time.
"Emotion plays an important role in guiding attention towards what is important and away from distractions," said Professor Cynthia Breazeal, one of the world's leading roboticists based at the Massachusetts Institute of Technology. "It allows the robot to make better decisions, learn more effectively and interact more appropriately."
How much might be possible with humanoid robots? How close can we get to androids -- such as those envisioned by Isaac Asimov -- who are virtually indistinguishable from us?
Whatever is achieved, it will take many more small steps. Perhaps, however, a "grand challenge" can accelerate that progress.
This is from a special report in The Daily Yomiuri (Japan):
At times, it may seem as if technology is moving ahead at breakneck speed. But in reality, most technological and scientific innovation saunters forward in a stepwise fashion, building on past success and carefully hedged against serious risk.

Grand challenges are different. By design, grand challenges are dreamed up to push the envelope, to break through barriers, and to ignore limits. Think of projects such as putting a man on the moon or mapping the human genome and you get the idea.
Grand challenges have become a favorite paradigm-shifting mechanism in many scientific and engineering disciplines ranging from mathematics and biology to psychology and astrophysics.
Typically what they all have in common is a blend of imagination and wonder -- a sense that something heretofore thought impossible might just be within reach.
In 2002, a group of British researchers set about defining a series of grand challenges. One of those they focused on is to create "a succession of increasingly sophisticated working robots."
Cognitive science, artificial intelligence (AI), and robotics, while related, have traditionally followed distinct trajectories. Cognitive science is primarily concerned with understanding the human mind, while artificial intelligence would be happy to create any type of intelligent system, humanlike or not. Robotics brings programmed action, intelligent or otherwise, into the realm of the physical.

In the true spirit of a grand challenge, the Architecture of Brain & Mind project aims to bring these three disciplines together in a single demonstrable system. Along the way, researchers will have to solve a range of rather thorny problems related to natural intelligence, perception, reasoning, learning, and problem solving.
At first glance, these seem like the standard set of AI problems. However, the project distinguishes itself by including even less understood human capabilities such as curiosity, creativity, and the system's ability not only to know what it is doing, but also to know the reasons for its actions (at least in some cases).
One sign of success would be a robot capable of functioning at the level of a 2- to 5-year-old child, which is no small task. Another milestone could be a robot capable of autonomously helping a disabled person around a house without explicit pre-programming about its environment. Either result would be an outstanding achievement by today's standards.
We, and others, are using a phrase (attributed to Stephen Hawking) so often now that it is beginning to seem trite, and yet never has it been more true: Today's science fiction is often tomorrow's science fact.
Nearly all of us living today will witness many things "heretofore thought impossible," including, perhaps, androids that are eerily close to humans. Other wonders could include radically extended healthy human lifespans, robust artificial intelligence, and, of course, desktop molecular manufacturing.
That last one is most likely to arrive earliest. We expect that the world will have to deal with the transformative and disruptive consequences of advanced nanotechnology no later than 2020, and quite possibly several years sooner.
Mike Treder (Hat tip: KurzweilAI.net)
Mike, is there a general consensus on the other end of that 2020 forecast for MM? You said no later than 2020, but what is the most reasonable forecast on the early side of the timeline? (Taking into account that of course we could be wrong, but if you were to try to "ballpark" it...)
Posted by: Eric | February 21, 2007 at 12:54 PM
Eric, it's hard to know. We don't even know if a molecular manufacturing program has been started already, somewhere in the world. And we can't predict events like the recent UK Ideas Factory that may move the field forward significantly all by themselves.
I wouldn't be shocked if a well-run crash program started in 2010 could do something useful by 2015. But that's a scenario, not a prediction.
("Something useful" means exponential manufacturing that's programmable and flexible enough to make a general-purpose set of nanoscale building blocks that can be combined into large products.)
Chris
Posted by: Chris Phoenix | February 21, 2007 at 06:28 PM
Eric, on our main website we say: "It might become a reality by 2010, likely will by 2015, and almost certainly will by 2020." That's not a consensus, of course, but an informed opinion.
MM being developed by 2010 would mean that a secret program has been operating for a number of years. We don't think that's the case, but we also can't rule out the possibility.
Our statement that MM "likely" will arrive by 2015 is based on the expectation that over the next several years it will become increasingly clear how powerful and valuable the technology will be -- and so someone, somewhere, will find a way to fund a crash program.
Posted by: Mike Treder, CRN | February 22, 2007 at 06:48 AM
I've always wondered what would have happened if nanotechnology developed the way Feynman proposed in his lecture, i.e., building progressively smaller factories. Would the science and engineering have been straightforward until you reached a certain size? If so, what is that size? At what point from the other direction can we safely employ traditional mechanical engineering principles? That, to me, is the question that will answer just how fast a nanofactory can be developed.
Most of the skepticism surrounding MNT has focused on the basic chemical reactions. If and when that is dealt with, I imagine the target will move to the higher levels. From a systems integration perspective, a nanofactory *looks* like the most complex device humans have ever envisioned. What I am wondering is, will all the pieces fall into place when all the major hurdles are overcome?
Posted by: NanoEnthusiast | February 22, 2007 at 12:26 PM
NanoEnthusiast, great question. My answer: It would have become quite difficult at the moment the parts became too small to manipulate with handheld tools. And it would have become extremely difficult at the moment the parts became too small to see with photons. I know that's not what you were asking, but I think it's an important barrier to recognize.
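To attach rough numbers to that second barrier: Feynman's lecture imagined each generation of machine shop at roughly quarter scale. The 1/4 shrink factor below is an assumption for illustration; only the endpoints matter much.

```python
import math

# How many quarter-scale generations from a benchtop machine shop
# (~1 m) down to molecular dimensions? The 1/4 shrink per generation
# is an assumed figure loosely based on Feynman's lecture.
START_SIZE_M = 1.0
OPTICAL_LIMIT_M = 200e-9   # roughly the diffraction limit of visible light
TARGET_SIZE_M = 1e-9       # molecular scale
SHRINK = 4.0

gens_to_blind = math.log(START_SIZE_M / OPTICAL_LIMIT_M, SHRINK)
gens_to_target = math.log(START_SIZE_M / TARGET_SIZE_M, SHRINK)
print(f"generations until parts are too small to see: ~{gens_to_blind:.0f}")
print(f"generations to reach the nanometer scale:     ~{gens_to_target:.0f}")
# ~11 of the ~15 generations are still visible; the last four are not,
# and those are exactly the ones where every tool must be reinvented.
```

So most of the shrinking happens in plain sight, but the final stretch, where the interesting chemistry begins, is exactly where photons and handheld tools both give out.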
I'm guessing that the first re-engineering of the machines themselves would have been the need to replace electric motors with either different kinds of motors or clutches from an external drive. The second might have been either metal softness or lubrication problems.
Skepticism around MM has focused on anything the skeptics can think of: software programming, thermal noise, supposedly competitive alternative approaches, and, as you say, chemistry. So it is already at the higher levels as well as the lower levels -- and it has persisted for decades in areas that have already been dealt with.
I do not think a nanofactory has to be more complex than a computer. Whether that's an IBM PC from 1982 running DOS, or a modern gaming machine running Vista, I don't know. I think it's closer to the former--at least the early versions can be. Once you have general-purpose robotics that can be externally programmed, and once you have a small set of general-purpose operations that can be combined in long sequences, then your design space becomes huge while your design task remains relatively simple.
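A quick way to see how fast that design space grows: take a handful of primitive operations and count the distinct sequences of a given length. The operation names below are invented for this sketch; any small general-purpose set behaves the same way.

```python
# A tiny instruction set composed into sequences: the controller stays
# simple while the space of possible products explodes. Operation names
# are invented for this illustration.
PRIMITIVES = ["move_x", "move_y", "move_z", "bond", "release", "rotate"]

def design_space(sequence_length: int) -> int:
    """Number of distinct operation sequences of the given length."""
    return len(PRIMITIVES) ** sequence_length

for length in (10, 100, 1000):
    digits = len(str(design_space(length)))
    print(f"sequences of length {length:4d}: ~10^{digits - 1} possible designs")
# length 10 -> ~10^7; length 100 -> ~10^77; length 1000 -> ~10^778.
```

Finding the useful sequences in that space is the hard part, but that's a design problem, not a hardware one; the machine that executes them can stay as simple as the instruction set.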
Chris
Posted by: Chris Phoenix | February 22, 2007 at 05:58 PM