
May 17, 2006

TrackBack

TrackBack URL for this entry:
http://www.typepad.com/services/trackback/6a00d83451db8a69e200d8345d734a69e2

Listed below are links to weblogs that reference All of a Sudden -- BOOM!!:

» Thinking Robot -- How Far Away? from BrainBasedBusiness
Mike Treder asked in his blog recently ... if a computer can ever be as smart as a human... and his answers will fascinate you.... I was also interested in a recent story in The Business Online Magazine where Adam... [Read More]

» Thinking Robot -- How Far Away? from BrainBasedBusiness
Mike Treder asked in his blog recently ... if a computer can ever be as smart as a human... and his answers will fascinate you and show you your brain on a microchip.... Sounds like a future blog here ...... [Read More]

Comments

You can follow this conversation by subscribing to the comment feed for this post.

Jan-Willem Bats

I don't know of any examples that demonstrate powerful exponential acceleration other than the Human Genome Project.

What others are there?

ellenweber

Mike, thanks for this interesting post... what a promise this holds for repair of the human brain, and also for the possibility of robots helping humans... What do you think of the Reverb robot currently being tested for some of the processes you described so well here?

Phillip Huggan

It's too bad there isn't a Center for Responsible AI. If you create an AI without taking the time to learn ethics at a very high level, or without taking advice from people who do understand ethics at that level, the goal system may malfunction and we will all be dead.

NanoEnthusiast

I agree with the basic premise that there is an inflection point in the future for both AI and MNT. However, I am not convinced you can so easily map it to a timeline. It seems to me that Kurzweil has massaged the data to ensure that this will all transpire within his lifetime. His idea of double-exponential growth seems especially suspect. There is no guarantee that this will happen in a 15-to-25-year window, though if I had to guess I'd say 2045 for MNT and 2050 for AI.

Hal

Phillip, SIAI, www.singinst.org, is similar to what you are calling the Center for Responsible AI. They are all about trying to ensure that AI is designed with built-in ethical guidelines so that it will not have harmful effects. This is a much more difficult task than it may seem, due to the seeming impossibility of predicting actions by entities much more intelligent than us.

Mike, technically there is no such thing as a point of "exponential take-off". Hofstadter complained about Kurzweil's usage of a similar term, the "knee" in the exponential curve where it goes from mostly flat to steeply climbing. In fact, exponential curves look the same everywhere, and it's just a matter of how much you zoom and scale them that gives the illusion of two different regimes of growth.
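
Hal's self-similarity point can be checked numerically. A minimal sketch (Python, with arbitrary illustrative constants): shifting an exponential in time is exactly equivalent to rescaling it in value, so every window of the curve has the same shape, and any apparent "knee" is an artifact of the axis scale you chose.

```python
import numpy as np

k, s = 0.5, 4.0                       # arbitrary growth rate and time shift
t = np.linspace(0.0, 10.0, 200)

f = np.exp(k * t)                     # the exponential curve f(t) = e^(k*t)
f_shifted = np.exp(k * (t + s))       # the same curve viewed s units later
f_rescaled = f * np.exp(k * s)        # the original, merely rescaled in y

# e^(k(t+s)) == e^(kt) * e^(ks): the "later, steeper" part of the curve
# is just the "earlier, flatter" part under a different y-axis scale.
print(np.allclose(f_shifted, f_rescaled))  # True
```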

Kurt

I think the AI people are wrong in their estimates on when we will develop real AI. For various technical reasons, this is unlikely to occur within the next 50 years.

Memory and identity in brains are based on dendritic connections: on the number of dendrites and on their chemical type (there are different types of dendrites, distinguished by chemistry). The critical thing here is that dendritic connections are dynamic. They are continually deleted and grown, which is the basis of memory and learning.

In contrast, all existing and proposed computer architectures and hardware rely on fixed electronic interconnects making up the ICs and so on. As far as I am aware, no computer system has been proposed that would have dynamically reconfigurable interconnects analogous to those of the human brain.

More significantly, designing such a dynamic system would require active reconfigurable elements based on some kind of MEMS or nanotechnology. Such a technology is likely to be so similar in function and form to biology that it would employ the same molecular principles as biological systems (i.e., wet nanotechnology). The result is that your AI brain is going to be very similar to a biological brain.

Not only is this some time away, it is unlikely to offer any significant benefit over non-AI technology. We design computers to be used as tools. They do things (like number crunching) that our brains cannot. As computers become more powerful (molecular electronics and the like), they will be more and more useful as tools, designed specifically for the tasks we need done. This development path is unlikely to result in true AI.

The reason is that sentience is based on self-awareness. Maintaining a sense of self, in and of itself, requires computing resources that could otherwise be used for whatever computational task you need done. This represents a sort of "loss," or reduced capacity, that does not make sense in a tool.

I think real AI is unlikely in the near-term future.

Tom Craver

Kurt:
I think most assume that reconfigurability will be done in software. Computer memory is perfectly alterable.
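
A minimal sketch of Tom's point (toy Python; the class and pruning rule are illustrative assumptions, not a real neural-simulation API): when connectivity is stored as ordinary mutable data, connections can be grown, strengthened, and deleted at runtime on entirely fixed hardware.

```python
import random

class PlasticNetwork:
    """Toy network whose connectivity is just data in memory,
    so it can be rewired continually without new hardware."""

    def __init__(self):
        self.weights = {}                        # (src, dst) -> strength

    def grow(self, src, dst):
        self.weights[(src, dst)] = random.uniform(0.0, 0.2)

    def reinforce(self, src, dst, delta=0.05):
        if (src, dst) in self.weights:           # crude Hebbian-style bump
            self.weights[(src, dst)] += delta

    def prune(self, threshold=0.05):
        # continually delete weak links, loosely mimicking the dynamic
        # deletion and growth of dendritic connections described above
        self.weights = {k: v for k, v in self.weights.items()
                        if v >= threshold}

net = PlasticNetwork()
for _ in range(200):
    net.grow(random.randrange(100), random.randrange(100))
net.prune()
print(len(net.weights), "connections survive pruning")
```

The hardware interconnects never change; only the data describing the network does. That is the software answer to Kurt's objection, at the cost of speed relative to dedicated hardware.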

NanoEnthusiast

To Hal:
I agree up to a point. I don't think there has to be a brief period of time (say, six months to a year) in which radical change unfolds. If you had been born at the turn of the last century, there was a time (more like a decade) when air travel went from a curiosity to being everywhere. Looking back at your life, you might be able to recognize that decade as a knee. The question is how much spread we are talking about for AI. I would imagine a machine finally passing the Turing test would be viewed by history as the knee point, and everything after might be viewed as the singularity, but only if we continue improvements and greatly exceed human intelligence.

I am not sure it is even possible to control a super AI, at least not one with general intelligence. Functional autistic AIs, each with a limited domain and communicating with each other only through humans, seem to me the only safe way to harness most of the benefits of super AI. Allowing them to coalesce into a whole would probably be more powerful, but dangerous.

To Kurt:
On the subject of exotic hardware I agree with Tom.

The problem with thinking of computers as just tools is that people increasingly want their machines to act as a proxy in what has previously been strictly human decision making. As an example, programs to automate stock trading must (if they are ever going to compete against human traders) have an internal model of human psychology. Markets have a large psychological component, as do many other human fields. Even something as simple as language translation involves a lot of human assumptions based on context. In order to complete their tasks, computers increasingly must "ask" themselves, "If I were a human in such-and-such a situation, how would I respond?" It may very well be that various degrees of self-awareness are necessary for some desired applications. This is the danger.

Mike Treder, CRN

Hal, you're right, of course, that exponential curves look flat when viewed up close -- that's one of the central messages of my presentation called "Nanotechnology on an Upward Slope". But it's wrong to take that observation and reason that exponential change will not be transformative and disruptive.

Saying there is no "knee" in a smooth exponential like Moore's Law misses the point, which is that the human response to rapid technological change is not at all certain to follow a smooth curve. That's why putting a knee in the graph on this page is perfectly logical.

Brian Wang

I think there might be a more productive discussion of what is possible for AI if specific capabilities were discussed. A phrase like "true artificial intelligence" is a constantly moving and ill-defined target.

Even the Turing test is kind of vague.

Here is a roadmap and survey of technology from 2003 regarding ambient intelligence:
http://fiste.jrc.es/download/AmIReportFinal.pdf

Reprogrammable chips: FPGAs, ASICs, PLDs
http://en.wikipedia.org/wiki/Fpga
http://news.moneycentral.msn.com/provider/providerarticle.asp?Feed=PR&Date=20060515&ID=5719331
http://www.electronicstalk.com/indexes/categorybrowseaf.html
What can they do now? What will they be able to do in the future? Will there be new memory and chip architectures that could have all of their advantages and fewer disadvantages?

By the way, Kurt: how would FPGAs or PLDs not be able to do what you think is required?

How about this announcement from 2004 of artificial neurons that learn:
http://www.physorg.com/news300.html

Machines can learn:
Software is now adaptive and able to learn; programs can make themselves better at search or at recommendations.
http://www.isgec.org/gecco-2005/committees.html
http://www.physorg.com/news2933.html
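
To make "machines can learn" concrete, here is a minimal sketch (plain Python, a toy task of my own choosing, not any system from the links above): a perceptron that adjusts its own weights from examples instead of being hand-coded.

```python
# Minimal perceptron learning the logical OR function from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted by the program itself
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):                               # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                        # how wrong was the guess?
        w[0] += lr * err * x1                     # nudge parameters toward
        w[1] += lr * err * x2                     # the correct answer
        b += lr * err

print(w, b)  # parameters learned from data, not written in by hand
```

The learning systems at the links above are far more sophisticated, but the adjust-from-feedback loop is the same in outline.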

Dynamic reconfigurability in hardware does not seem like a showstopper. If it is done in software instead, it just tends to be slower.

The autonomous robotic driving of cars has had its breakthrough. That could take jobs from truck and bus drivers (several million jobs).

What are the specific jobs and tasks that people are paid to do now that will remain beyond AI and robotics? How will people have to adapt to add value?

Brian Wang

Some clarifications:
The Turing test is not so much vague as open to question. A test of computer-generated images can fool some people easily; a Turing test could fool some people easily too.

Some scores for chatbots
http://www.turinghub.com/scores.php

The Turing test prize:
http://www.loebner.net/Prizef/loebner-prize.html

The best one from 2005 is Jabberwacky, a learning chatbot:
http://www.jabberwacky.com/j2about

Even if an advanced learning chatbot passed the Turing test and fooled most people, it would have limited usefulness. It would have to be combined with expert systems to replace people at call centers, and call center automation may not need that capability: an automated call center does not need to fool people, just provide the right answers and solutions.

Problems with the Turing test are discussed here:
http://en.wikipedia.org/wiki/Turing_test

==== Robots, AI, automation, productivity, and impact

Cost reduction: wider impact from economically replacing more people

Productivity boosting:

Robot mobility: driving, walking, flying, being everywhere. iRobot vacuuming, drones, etc...

Endurance:

Autonomous: build in more useful senses and sensors.

Adaptable:

Intelligence: Expert systems.

How much learning is needed if everything gets digitized and accessible, and more and more becomes searchable and usable by computers?

===
The most contentious issue, self-awareness, seems to be of questionable value in an AI. If you can make really good artificial advisors, agents, servants, etc., they do not need to be self-aware.

Having some common sense, to add context and to help get correct instructions, is useful. The Cyc program may help to do this.

Creativity: this also needs context. Creativity for what? There is already a program that can come up with patentable innovations using genetic algorithms, but far more innovations come from people using computers to speed research.
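
A genetic algorithm of the kind mentioned here can be sketched in a few lines (a toy: the all-ones fitness target and the parameters are my own illustrative choices, nothing like the patent-generating system referenced). Real design-evolving GAs use far richer encodings, but the mutate/recombine/select loop is the same in outline.

```python
import random

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 100

def fitness(bits):
    return sum(bits)                    # toy goal: evolve toward all ones

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]  # flip a few bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)   # single-point recombination
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]       # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print(max(fitness(p) for p in pop), "of", LENGTH)
```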

I think that we will make progress in understanding more and more details about intelligence. It is scientific work, and we still do not know enough; therefore the goals are not well defined. It is tough to know whether you have reached a goal if you do not know what the goal is. Once we do understand some aspect of it well enough, the solutions come, and most people forget about it and discount it (chess-playing computers, walking robots, etc.).

Kurt

Brian,

FPGAs and PLDs do not dynamically re-configure themselves in the manner of human brains.

The other issue that has not been discussed is the various hierarchies of memory in the human brain. Dendritic connections are one form of memory storage. Long-term potentiation (LTP) is another. It is also believed that gene expression within the neurons themselves may be another form of memory storage. Also, large groups of neurons communicate with each other by diffusion-driven chemistry in a manner completely independent of dendritic connections.

My point is that there are at least three or four mechanisms of memory and communication at work in human brains, all of them mechanistically different from how digital computers work.

Is this necessary or relevant for creating AI? No one knows right now. What is clear is that there is a disconnect between the researchers in neurobiology and those in AI and computer systems. I do not believe that AI/computer people have an appreciation of the complexities of neurobiology, and I do not believe that we can create AI or sentience in a machine until we have more knowledge of how neurobiology works.

This is why I believe that near-term AI is unlikely.

Brian Wang

What Kurzweil talks about, though, is the work being done that more closely matches neurobiology: reverse engineering the brain and then matching that complexity. So you and he agree on the approach, but you do not think the progress will be as swift as he does.

Kurt

No, I definitely do not think the progress will occur as quickly as Kurzweil believes. I think AI by 2050 is possible. However, I think it will be based on "wet" nanotechnology (synthetic biology being one version of this), making it more similar, mechanistically speaking, to biological brains than to nano-electronics.

The comments to this entry are closed.