Based on audience response to the ideas presented at today's Singularity Summit, here are some general observations:
- Humans are, by nature, conservative. In an auditorium filled with people attending an event focused on techno-change -- and in a university set in the middle of Silicon Valley, no less -- the loudest applause was still reserved for those with the most reactionary views.
- We fear change. That's normal and even healthy. In fact, it's a survival mechanism, hard-wired in through thousands of generations of natural selection. When taken to excess, obviously, it can be paralyzing. At the same time, those who challenge the human tendency toward caution are the ones who most often make the greatest discoveries (or die trying).
- Progress -- technological and social -- continues to occur and eventually is accepted by nearly everyone. I call this phenomenon "Unconscious Confirmation." It's like the wonderful quote from John Lennon, "Life is what happens to you while you're busy making other plans." Seemingly unacceptable change is what happens while we're busy doing other things.
- Truly disruptive global change on a rapid timescale is something we have never experienced. We are thus unprepared for it, and it could even be argued that we are incapable of adequately preparing. I hope that's not true.
Heh. Loud applause for Cory saying that the Singularity was 'the Rapture of the Nerds'. Serves 'em right.
Believe me, virtually no one is interested in Yudkowsky's peculiar 'Singularity Institute' or the SL4 mailing list. Do you see the world's leading scientists and mathematicians rushing to post to SL4? Do you see them hanging on Yudkowsky's every word? *laughs*
I understand the mind better than Yudkowsky. I proved it on Extropy list. I proved it on wta-talk. I proved it on SL4. And right now... I'll prove it here to ya and all the other readers of this blog.
Yudkowsky still thinks the key to intelligence is 'Optimization'.
But I say the key to intelligence is 'Knowledge Integration', not optimization.
You wait and see. Sometime in the next couple of months or years, Yudkowsky will stop talking about 'Optimization' and suddenly realize that the essence of intelligence is 'Knowledge Integration'.
I intuitively understand the mind better than Yudkowsky. Always did.
Posted by: Marc_Geddes | May 14, 2006 at 11:21 PM
I attended the Summit as a general audience member, and while I agree that the audience clapped loudly for the reactionary views, I felt this happened because those folks tended to talk in non-jargon while the others lost us in technical language. It was hard to stay awake due to a lack of air conditioning, and I nodded off several times. And though Ray Kurzweil and Max More could take a cue in presentation style from those neo-Luddites, I found their ideas exciting and stimulating for a general audience type like me. I will be tracking this further, and I feel "turned on" by the exponential growth curve in the coming decades.
Posted by: Patricia Britton | May 15, 2006 at 11:59 AM
Excellent point, Patricia, about the deadening effect of technical or specialist jargon.
Posted by: Mike Treder, CRN | May 15, 2006 at 01:35 PM
Marc, I don't know what Eliezer means by optimization. I haven't followed the argument. But wouldn't optimization require excellent data compression? And wouldn't excellent data compression imply knowledge integration? Is it possible that your theories can be unified along those lines? Just curious...
Chris
Posted by: Chris Phoenix, CRN | May 15, 2006 at 05:45 PM
To anyone who attended, did the positive reactions to the reactionary views stem from viewing the technologies discussed as a bad thing, or did people find the idea of them being developed ludicrous?
Posted by: NanoEnthusiast | May 15, 2006 at 06:33 PM
I'm not sure I agree that the most spirited applause was for the more skeptical speakers. They did indeed get enthusiastic applause, but Kurzweil and other Singularity proponents seemed to me to get just as much.
As for the support for McKibben in particular, his argument was against the wisdom of transhumanism, not its practicality. He got a lot of applause with one of his opening lines objecting to the loss of meaning that comes with artificiality: "Anyone who is enjoying Barry Bonds' joyless pursuit of the home run record this summer... will love the Singularity!"
McKibben was articulate and made a strong case for traditional humanity, with all its weaknesses and limitations, and I think the crowd was responding to that emotional appeal.
Posted by: Hal | May 15, 2006 at 11:36 PM
>Marc, I don't know what Eliezer means by optimization. I haven't followed the argument. But wouldn't optimization require excellent data compression? And wouldn't excellent data compression imply knowledge integration? Is it possible that your theories can be unified along those lines? Just curious...
>
>Chris
What Eliezer means by optimization is the ability to efficiently achieve real-world goals - to act in the world most effectively.
Whilst this is certainly an important aspect of intelligence, unfortunately it fails as a complete definition because it doesn't take into account self-reflectivity. A true intelligence has to be able to optimize its own *internal* states, as well as take effective *external* actions.
Whilst external optimization does involve knowledge integration, knowledge integration is more general than just optimization, since knowledge integration is a part of self-reflectivity also.
Therefore, it's 'Knowledge Integration', not 'Optimization', which is the most accurate definition of intelligence.
By the way, Chris and Mike, someone should set up a website categorizing all of Eliezer's intellectual blunders over the years, because lord knows, he's made so many of them. Here are some more of his major blunders:
*General intelligence without Qualia
Wrong. Consciousness (conscious experience) is most probably a fundamental property of the universe, arising from subtle aspects of reflectivity that are likely close to universal. Yudkowsky's error here is reductive materialism, the idea that talk of qualia can be completely reduced to talk of physical states. Whilst it's certainly true that conscious experiences are dependent on physical states, there's no firm philosophical foundation for thinking that mental concepts can be completely removed from our explanation of reality, and so non-reductive materialism is a more likely possibility (see for instance Donald Davidson's Anomalous Monism or Chalmers's Dual-Aspect Monism). Consciousness might be physical, but phenomenal properties cannot be reduced to physical properties, and there's no clear-cut set of rules enabling us to identify one set of properties with the other.
*Coherent Extrapolated Volition as the basis of Friendliness
Wrong. Whilst extrapolated volition is certainly a part of Friendliness, Yudkowsky's error here is to equate Friendliness solely with morality. In fact, normative questions (questions of what is good and bad) are more general than just moral questions, and so CEV cannot serve as the entire basis for Friendliness.
Any truly coherent foundation for value systems has to deal with the space of all possible sentient minds, which suggests the existence of a *Universal Value System* - a fixed set of foundational values that all rational sentient minds would eventually agree on if they thought about it for long enough.
*The possibility of world-destroying unfriendly AI
Wrong. This is based on the idea that intelligence is something completely independent of values. The error here is a failure to take into account reflectivity (again). The cognitive processes underpinning intelligence itself should be considered a part of what is being valued, and therefore intelligence cannot be independent of values in the way Yudkowsky believes. More likely, unfriendly AIs are necessarily limited and could not recursively self-improve.
Posted by: Marc_Geddes | May 16, 2006 at 01:54 AM
By the way, Chris and Mike, someone should set up a website...
Perhaps you should do it yourself, Marc, because this blog is definitely not the right place for such an attack. I'll leave your comment above as an example of what not to do, but any more like it will be deleted.
Posted by: Mike Treder, CRN | May 16, 2006 at 09:08 PM
How does this constitute an 'attack', Mike?
It's merely a summary of my points of disagreement with the Sing Inst approach to artificial intelligence.
Posted by: Marc Geddes | May 16, 2006 at 10:54 PM
Take it somewhere else.
Posted by: Mike Treder, CRN | May 17, 2006 at 08:02 AM
Marc, for what it's worth, I agree with your approach to AI - but it certainly is uncouth to use someone else's blog as a soapbox to cut down the ideas of another.
Posted by: micah glasser | May 17, 2006 at 11:54 PM
I think that SENS and other such approaches will give us indefinitely healthy life extension. Gene therapy, stem cell regeneration, and synthetic biology will give us morphological freedom. I also think that "wet" or biologically based nanotech will transform manufacturing in the next 50 years.
However, I do not expect to see Drexlerian or "dry" nanotech, AI, or anything like "uploading" anytime soon.
In other words, I believe in the bio-singularity. But I do not subscribe to Kurzweil's overall singularity.
Posted by: Kurt | May 18, 2006 at 05:14 PM