2:50 PM
Bill McKibben, author of Enough, is speaking from a remote location via teleconference -- teleportation, so to speak. His image is projected on a small screen behind the other speakers. It's life-size, which is supposed to make it seem as if he's here, I guess, but the problem is that the image is hard for many of us to see. He's also not using a slide show, so basically we're just listening to a disembodied voice. To make matters worse, he is reading his talk, which tends to be harder to take in than extemporaneous speaking.
Bill's remarks are focused on his objection to "the immortalists." He says that instead of aiming to live forever, we should aim to live. That strikes me as a remarkably shallow platitude, as I see nothing wrong with vigorously pursuing both aims.
Now he is reporting on how humans, as a whole, are less happy today than we were 50 years ago. He claims that the answer of technoprogressives is always "more is better." But I think that's: a) a strawman, and b) confusing cause with correlation.
It's a strawman because virtually all of the futurist thinkers that I know (and I know most of the leading ones) are just as interested in living now as they are in living forever, and they are just as connected with human interests and values as they are to technological goodies.
The confusion between cause and correlation comes from the suggestion that people as a whole are less happy today because of advanced technology. There seems to be no real evidence for this.
3:15 PM
OK, McKibben is done, and now Ray Kurzweil is back at the podium to give his thoughts on and responses to all the earlier speakers. I'm not going to try to keep up; he's covering a lot of points in a short time. Happily, you can look forward to a Singularity Summit podcast, which is supposed to be coming in a few days, and then a streaming video some time after that.
3:42 PM
The final segment of today's event is a panel discussion including all of the speakers (except Eric Drexler, who is no longer here) and an audience Q&A. The panel is being moderated by Steve Jurvetson and Peter Thiel.
Steve began by asking the panel: Could "uploading" a human mind be expected "to work" if it was attempted before we have a thorough understanding of how the brain works?
Ray's answer was a little hard to understand, but I think he said, first, that uploading is not really necessary for the technological singularity to occur, and second, a premature upload probably wouldn't be successful. But he emphasized that uploading is "a side issue."
Eliezer talked about the slow rate of change provided by evolution versus the rapid rate possible through design and development.
A note here about the problem with a huge panel (10 speakers) at the end of a long day: it tends to be unfocused and not very satisfying. I prefer to have audience Q&A and/or speaker discussion distributed periodically throughout an event, not saved up until the end.
Peter Thiel asked: By what year will we have strong AI, and what is the probability that it will lead to a better world?
Ray says ~2029, and 90%. Wow, that's high!
Doug Hofstadter says human-level computer intelligence by ~2100, and who knows?
Nick says maybe 2040, and perhaps 50%.
Sebastian says 2101, and good in hindsight [laughter].
Max says it all depends on what we do.
John says around 2060, but it could be much sooner if our burgeoning computer networks -- mainly spurred by Google -- awaken spontaneously.
Audience question: Will strong AI emerge or be designed?
Eliezer says the question is too simple and anyway we don't know enough yet to provide a meaningful answer.
Ray says it's too complex to emerge spontaneously and probably will be designed.
Tyler Emerson asked Bill McKibben: How can we avoid existential risks?
He said, slow down. Say enough is enough. Relinquish some technologies. He added that he thinks the two most important inventions of the last few centuries are the concept of a protected wilderness and the concept of nonviolent resistance [applause, including from me].
In response to another audience question, Cory referred to the technological singularity as "the Rapture of the Nerds" (crediting Ken MacLeod), and got loud applause.
Somehow while I wasn't paying attention, the discussion turned to whether we will eventually have a world government. The consensus seemed to be Yes, although as to whether or not that will be a good thing, there was less agreement.
More discussion ensued on various topics until, finally, the event concluded at 5:35 PM. Whew, it was a long day!
Mike Treder
Tags: nanotechnology nanotech nano science technology ethics weblog