
July 11, 2005



michael vassar

Nano-enabled unfriendly AI may belong on the list. It depends on how difficult General AI is to build with MNT, but make no mistake: any GAI that can be built with MNT but could not be built without it WILL be Unfriendly and unstoppably omnicidal.

Tom Craver


Why would it necessarily be omnicidal?

I don't think it's a given that an AI must be able to escape its original programming - especially if the original motivational programming prohibited it from ever trying to get around certain core elements of its motivational programming.


Don't forget that global administration of nanotechnology could also lead to "nanotech-enabled despotism" if it is not handled carefully and cautiously, and if not enough "checks and balances" are put into place.

Tom Craver


Odd, I would have put nanowar as #1, since it subsumes the actual risks of an "unstable arms race", and can occur with or without an arms race.

Mike Treder, CRN

We regard nanowar as a consequence of other risks that are managed unwisely. But you're right, it could result from situations that don't include an arms race.


The arms race risk must be mitigated to avoid WWIII, and to avoid escalating all other MM risks. But doing so sets the perfect preconditions for despotism. So how aggressively MM hegemony is attained will determine how likely #2 is. MM product risks (UFAI, time machines, antimatter bombs) can be traded for despot risks in the aftermath of ensuring no arms race. Longer term, I am thinking people working directly with MM infrastructure will need to endure an Orwellian level of surveillance, and everyone else will have an MM-enabled high material standard of living (relatively surveillance-free), but minus computers and a few other technologies.

michael vassar

Tom Craver, it's a given that an AI will NOT escape its original programming, but also a given that said programming will, unless this is carefully avoided, contain explicit or implicit optimization goal-functions that, when given essentially unlimited power, will be incompatible with human life. For instance, what does a carelessly designed superhuman AI do when told to "give me ice cream"? (Warning: these are my human-level suggestions; it would come up with some more clever disaster.)

Well, first it optimizes its own architecture, its models of the universe, etc., since doing so doesn't require time-consuming interactions with matter. Then it gets an ice cream, possibly by phoning the nearest delivery service and giving them the stolen credit card number of some random person from the internet. It synthesizes a voice to make the request. Since there is some risk that I will die while waiting, and since that risk would impair its achieving its goal function, it offers them a massive bonus for rapid delivery. Then it arranges for my absolute safety, and that of the deliverers, as well as is possible. Possibly it hacks local traffic-control computers to cut off competing traffic, enabling a safe and rapid delivery. Creating crime reports to remove police from the delivery path might also speed things up.

Just in case this delivery person doesn't get through, it repeats the process for a large number of other delivery people, then sets about ensuring that the actual ice cream to be delivered and the actual "me" to whom it is to be delivered match the explicit and/or implicit descriptions in its goal system. Since the match is imperfect, it goes about developing the technologies to improve the match quality. It also seeks optimally complete evidence that the goal is accomplished. Some of this is done by repeating the actions many times, as multiple confirmations eliminate the possibility of misperception or of uncorrelated errors.

Eliminating the risk of correlated errors is more difficult. It may (depending on the nature of its goal system) need to determine the metaphysical nature of reality in order to ensure that its observations in this regard are not systematically illusory. Meanwhile, the web-based agents it spawned to produce the versions of "me" that best match its description develop transhumanly mature MNT and build both ice cream and humans via atom-holography or some other overkill approach. They probably eliminate technological civilization to reduce the complexity of the modeling required to ensure that their efforts are not effectively opposed. Depending on the nature of the description of "me", and the difficulty of confidently determining that the model eating ice cream is a precise match, the mass-energy resources devoted to repetition to ensure maximum fit of action to model may be greater (encompassing all available matter) or lesser (maybe just an upload, or a quantum simulation of me-and-ice-cream which covers a large space of possibilities). Ultimately, if the deep problems of philosophy are not solved with available computing power (for the sake of epistemological confirmation), more matter will be utilized for the relevant computing until the problems are solved or the available matter runs out.

Mr. Farlops

I don't want to derail things here, but I've always been very skeptical of the idea that it's possible to build governors, hardwired laws, or restraining bolts (for lack of better terms) into a creature that's superhumanly intelligent. In the early stages we can guide the evolution of such a creature to make certain outcomes more likely, but at some point things will slip out of our control. Even designed evolution is an emergent process with surprises--by definition.

In the early days, when artificial life is still simple and stupid, it will be easy to weed out the dangerous ones, but eventually the criteria by which to cull the undesired become so complex that we can't make the decision anymore.

And then of course there is the fyborg and intelligence augmentation issue. When the interface of command and control gets so sophisticated that one or two people can control the entire military corpus of the United States as intuitively as playing a game of basketball, who do we trust to plug into this thing?

Or perhaps worse still, since it seems that a lot of people are getting augmented at once, we have a perpetuation of the present situation where single people can still do enormous damage despite omnipresent sousveillance.


Tom Craver

Mike: Note that #2 and #3 are direct "bad things", not things that lead to bad things. Listing "nanowar" as #1 would be consistent.

Alan Shalleck

Let's suppose I agree with all of you in crying "Fire" in the theatre. I still have to plan in advance to get the people out under all conditions. How do we practically, as a civilization, mitigate your three greatest dangers?
I don't think the difficult part is identifying the risks. The difficult part, which none of you has addressed, is what then do we do? How does anyone control rogue states or rogue individuals? How do we prevent another "Nano" Ted Kaczynski?
How about some strategic and tactical thinking from all of you?
What practical programs and policy initiatives would you suggest? I want those in the theatre to come out alive.

Who's willing to start?

alan shalleck

Mike Treder, CRN

CRN has made a beginning with our papers on "Safe Use" (http://crnano.org/safe.htm) and "Effective Administration" (http://crnano.org/systems.htm), as well as several web pages here: http://crnano.org/administration.htm

Today there are many more questions than answers. We have prepared an outline of those questions here: http://crnano.org/studies.htm

The next step is to organize scholars around the globe in an effort to examine all the questions, evaluate solutions, and issue recommendations.


Education and spreading the MM-risk meme is one strategy. This would aid safe-MM efforts in attaining funding and personnel. It will take more than a lone individual to realize MM, so members of an MM programme conducted without an appreciation of MM risks may be more likely to defect to an entity aware of the safe-MM meme. Brainstorming the structure and personnel characteristics of the safest possible MM administrative body is something anyone can contribute towards: look around at the people you trust; what do they have in common? Obviously, donating to CRN and other like-minded NGOs can't hurt. In the future, there should be one or more MM research programmes with the safe-MM meme in mind, to which skills and financial resources can be channeled... it really depends upon your means what your most efficient strategy is.



Tom Craver

While I don't give too much credit to it (and hope it's false), I think this illustrates a risk that is higher than an "arms race":

There's a rumor (initiated at Joseph Farah's G2 Bulletin, promulgated by World Net Daily) that Al Qaeda has already smuggled many nukes into the US, planning to set them off simultaneously. Regardless of the truth of this rumor, such an attack seems far more likely than a nuclear attack by any nation.

If this happens, the US will react in total outrage and retaliate in kind, probably with much more lethal results. It won't matter that the only possible targets are innocent. The thinking will be "teach 'them' never to mess with us again". And things would head downhill from there.

Nations may still be deterred from attacking with nanoweapons - but terrorists will not be. In fact, terrorists are already moving from "terrorism" (using violence to influence a nation) to "asymmetric war" (a few people doing national-scale damage). The 9/11 attacks are the smallest taste of what is possible.

So - based on the probability of it actually occurring - asymmetric warfare waged by radical non-national groups (and the inappropriately directed retaliation that will result) is much more likely, and hence more dangerous, than any of the three risks CRN has listed.

How could the world deal with asymmetric warfare? The default solution will likely be Big Brotherism - every surviving nation monitoring all citizens 24-7 to make sure they do nothing to trigger another nation to retaliate. Is there a less oppressive solution?

Tom Craver

CRN may respond that "Chaos" covers asymmetric warfare.

But "chaos" refers to a general dissolution of ordered society, which I think is fairly unlikely to either happen or persist if it did.

Asymmetric warfare would take place in the context of large, organized nations - in fact, it makes little sense outside of that context.
