
May 13, 2006

Comments


Marc_Geddes

Heh. Loud applause for Cory saying that the Singularity was 'the Rapture of the Nerds'. Serves 'em right.

Believe me, virtually no one is interested in Yudkowsky's peculiar 'Singularity Institute' or the SL4 mailing list. Do you see the world's leading scientists and mathematicians rushing to post to SL4? Do you see them hanging on Yudkowsky's every word? *laughs*

I understand the mind better than Yudkowsky. I proved it on Extropy list. I proved it on wta-talk. I proved it on SL4. And right now... I'll prove it here to ya and all the other readers of this blog.

Yudkowsky still thinks the key to intelligence is 'Optimization'.

But I say the key to intelligence is 'Knowledge Integration', not optimization.

You wait and see. Sometime in the next couple of months or years Yudkowsky will stop talking about 'Optimization' and suddenly realize that the essence of intelligence is 'Knowledge Integration'.

I intuitively understand the mind better than Yudkowsky. Always did.

Patricia Britton

I attended the Summit as a general audience member, and while I agree that the audience clapped loudly for the reactionary views, I felt this happened because those folks tended to speak in plain language while the others lost us in technical jargon. It was hard to stay awake due to a lack of air conditioning, and I nodded off several times. And though Ray Kurzweil and Max More could take a cue in presentation style from those neo-Luddites, I found their ideas exciting and stimulating for a general audience type like me. I will be tracking this further, and I feel "turned on" by the exponential growth curve in the coming decades.

Mike Treder, CRN

Excellent point, Patricia, about the deadening effect of technical or specialist jargon.

Chris Phoenix, CRN

Marc, I don't know what Eliezer means by optimization. I haven't followed the argument. But wouldn't optimization require excellent data compression? And wouldn't excellent data compression imply knowledge integration? Is it possible that your theories can be unified along those lines? Just curious...

Chris

NanoEnthusiast

To anyone who attended, did the positive reactions to the reactionary views stem from viewing the technologies discussed as a bad thing, or did people find the idea of them being developed ludicrous?

Hal

I'm not sure I agree that the most spirited applause was for the more skeptical speakers. They did indeed get enthusiastic applause, but Kurzweil and other Singularity proponents seemed to me to get just as much.

As for the support for McKibben in particular, his argument was against the wisdom of transhumanism, not its practicality. He got a lot of applause with one of his opening lines objecting to the loss of meaning that comes with artificiality: "Anyone who is enjoying Barry Bonds' joyless pursuit of the home run record this summer... will love the Singularity!"

McKibben was articulate and made a strong case for traditional humanity, with all its weaknesses and limitations, and I think the crowd was responding to that emotional appeal.

Marc_Geddes

> Marc, I don't know what Eliezer means by optimization. I haven't followed the argument. But wouldn't optimization require excellent data compression? And wouldn't excellent data compression imply knowledge integration? Is it possible that your theories can be unified along those lines? Just curious...
>
> Chris

What Eliezer means by optimization is the ability to efficiently achieve real-world goals - to act in the world most effectively.

Whilst this is certainly an important aspect of intelligence, unfortunately it fails as a complete definition because it doesn't take into account self-reflectivity. A true intelligence has to be able to optimize its own *internal* states, as well as taking effective *external* actions.

Whilst external optimization does involve knowledge integration, knowledge integration is more general than optimization alone, since it is also a part of self-reflectivity.

Therefore, it's 'knowledge integration' which is the most accurate definition of intelligence and not 'Optimization'.

By the way Chris and Mike, someone should set up a website categorizing all of Eliezer's intellectual blunders over the years, because lord knows, he's made so many of them. Here are some more of his major blunders:

*General intelligence without Qualia

Wrong. Consciousness (conscious experience) is most probably a fundamental property of the universe, arising from subtle aspects of reflectivity that are likely close to universal. Yudkowsky's error here is reductive materialism, the idea that talk of qualia can be completely reduced to talk of physical states. Whilst it's certainly true that conscious experiences are dependent on physical states, there's no firm philosophical foundation for thinking that mental concepts can be completely removed from our explanation of reality, and so non-reductive materialism is a more likely possibility (see for instance Donald Davidson's Anomalous Monism or Chalmers's Dual-Aspect Monism). Consciousness might be physical, but phenomenal properties cannot be reduced to physical properties, and there's no clear-cut set of rules enabling us to identify one set of properties with the other.


*Coherent Extrapolated Volition as the basis of Friendliness

Wrong. Whilst extrapolated volition is certainly a part of Friendliness, Yudkowsky's error here is to equate Friendliness solely with morality. In fact normative questions (questions of what is good and bad) are more general than just moral questions, and so CEV cannot serve as the entire basis for Friendliness.

Any truly coherent foundation for value systems has to deal with the space of all possible sentient minds, which suggests the existence of a *Universal Value System* - a fixed set of foundational values that all rational sentient minds would eventually agree on if they thought about it for long enough.


*The possibility of world-destroying unfriendly AI

Wrong. This is based on the idea that intelligence is something completely independent of values. The error here is a failure to take into account reflectivity (again). The cognitive processes underpinning intelligence itself should be considered a part of what is being valued, and therefore intelligence cannot be independent of values in the way Yudkowsky believes. More likely, unfriendly AIs are necessarily limited and could not recursively self-improve.

Mike Treder, CRN

> By the way Chris and Mike, someone should set up a website...

Perhaps you should do it yourself, Marc, because this blog is definitely not the right place for such an attack. I'll leave your comment above as an example of what not to do, but any more like it will be deleted.

Marc Geddes

How does this constitute an 'attack', Mike?

It's merely a summary of my points of disagreement with the Sing Inst approach to artificial intelligence.


Mike Treder, CRN

Take it somewhere else.

micah glasser

Marc, for what it's worth I agree with your approach to AI - but it certainly is uncouth to use someone else's blog as a soapbox to cut down the ideas of another.

Kurt

I think that SENS and other such approaches will give us indefinite healthy life extension. Gene therapy, stem cell regeneration, and synthetic biology will give us morphological freedom. I also think that "wet" or biologically based nanotech will transform manufacturing in the next 50 years.

However, I do not expect to see Drexlerian or "dry" nanotech, AI, or anything like "uploading" anytime soon.

In other words, I believe in the bio-singularity. But I do not subscribe to Kurzweil's overall singularity.

The comments to this entry are closed.