

March 11, 2009



Ryan Ellis

You're missing the fundamental problem: it's easier to attack than to prevent all possible attacks (see nuclear weapons, computer viruses, and chemical and biological weapons). Even if it's harder than the video makes it out to be, the problem still stands that someone could eventually create a self-replicating nanobot. If you concede that there is a larger, more imminent threat than terrorist use, doesn't that prove the point? We should be terrified of this even if it has only a 1% chance of occurring within 100 years (an extremely low estimate). We're facing a lot of existential risks over the next 100 years (many we haven't thought of yet); this is just one of the most obvious.

If you aren't convinced of the risk, think of it this way: the Search for Extra-Terrestrial Intelligence has been unsuccessful, and by most astronomers' predictions we should have found thousands or millions of intelligent species by now, but we haven't. Yes, life could be harder to create than we expect. Yes, it could be difficult to develop the ability to use tools (though not that hard, considering the human/ape example). But we almost certainly should have found something by now, and that doesn't bode well for us. You can't argue with the statistics; they take all of this into account.

I know I will personally do what I can to help think of solutions to these problems going forward, and I hope you do as well. Don't lull yourself into a false sense of security by coming up with specific responses to specific scenarios, because we're facing the overarching issue that problems usually come before solutions.

The comments to this entry are closed.