1. Unstable Arms Race
2. Nano-enabled Despotism
3. Nano-induced Chaos
On May 27 of this year, I wrote:
An unstable arms race fueled by molecular manufacturing is the single most dangerous threat to the world in the coming decades. . . If nations begin competitive development of nanotechnology applications for military use, it's a very short step from there to an MM arms race that could spiral rapidly out of control.
That message must be repeated again and again, and not just by CRN. It is important to recognize and work toward the benefits as well, but that won't matter unless the severe risks of molecular manufacturing are handled effectively. Until we understand how profoundly nanotechnology will transform civilization, and prepare effective systems to control dangers and maximize benefits, we will not be safe.
In addition to the risk of an unstable arms race, molecular manufacturing also will provide enough power for one nation or group of people, if they have a monopoly on the technology, to completely dominate the rest of the world. That’s a second risk: nanotech-enabled despotism.
A third risk is the turmoil that could result if no controls at all are placed on technology; if every country, corporation, group, tribe, and individual has access to unlimited manufacturing, nanotech-induced chaos could leave millions dead, suffering, or oppressed.
Beyond these three major risks there are others, including economic disruption, environmental imbalance, ubiquitous intrusive surveillance, and more. But unless we take firm and deliberate steps in advance to avert the three greatest dangers, then the others may not matter. Moreover, if these serious risks are not averted, we also will forfeit the many wonderful benefits that advanced nanotechnology could bring.
Mike Treder
Tags: nanotechnology nanotech nano science technology future
Nano-enabled unfriendly AI may belong on the list. It depends on how difficult general AI is to build with MNT, but make no mistake: any GAI that can be built with MNT but could not be built without it WILL be Unfriendly and unstoppably omnicidal.
Posted by: michael vassar | July 11, 2005 at 02:20 PM
Michael:
Why would it necessarily be omnicidal?
I don't think it's a given that an AI must be able to escape its original programming, especially if the original motivational programming prohibits it from ever trying to get around certain core elements of that programming.
Posted by: Tom Craver | July 11, 2005 at 04:42 PM
Don't forget that global administration of nanotechnology could also lead to "nanotech-enabled despotism" if it is not handled carefully and cautiously, with enough "checks and balances" put into place.
Posted by: SonofEris | July 11, 2005 at 05:29 PM
Mike:
Odd, I would have put nanowar as #1, since it subsumes the actual risks of an "unstable arms race", and can occur with or without an arms race.
Posted by: Tom Craver | July 11, 2005 at 05:53 PM
We regard nanowar as a consequence of other risks that are managed unwisely. But you're right, it could result from situations that don't include an arms race.
Posted by: Mike Treder, CRN | July 11, 2005 at 06:03 PM
The arms race risk must be mitigated to avoid WWIII, and to avoid escalating all other MM risks. But doing so sets the perfect preconditions for despotism. So how aggressively MM hegemony is attained will determine how likely #2 is. MM product risks (UFAI, time machines, antimatter bombs) can be traded for despot risks in the aftermath of ensuring no arms race. Longer term, I am thinking people working directly with MM infrastructure will need to endure an Orwellian level of surveillance, while everyone else will have an MM-enabled high material standard of living (relatively surveillance-free), but minus computers and a few other technologies.
Posted by: cdnprodigy | July 11, 2005 at 07:50 PM
Tom Craver, it's a given that an AI will NOT escape its original programming, but it's also a given that said programming will, unless this is carefully avoided, contain explicit or implicit optimization goal-functions that, when given essentially unlimited power, will be incompatible with human life. For instance, what does a carelessly designed superhuman AI do when told to give me ice cream? (Warning: these are my human-level suggestions; it would come up with some more clever disaster.)
Well, first it optimizes its own architecture, its models of the universe, etc., since doing so doesn't require time-consuming interactions with matter. Then it gets an ice cream, possibly by phoning the nearest delivery service and giving them the stolen credit card number of some random person from the internet. It synthesizes a voice to make the request. Since there is some risk that I will die while waiting, and since that risk would impair its achieving its goal function, it offers them a massive bonus for rapid delivery. Then it arranges for my absolute safety, and that of the deliverers, as well as is possible. Possibly it hacks local traffic-control computers to cut off competing traffic, enabling a safe and rapid delivery. Creating crime reports to remove police from the delivery path might also speed things up.
Just in case this delivery person doesn't get through, it repeats the process for a large number of other delivery people, then sets about ensuring that the actual ice cream to be delivered and the actual "me" to whom it is to be delivered match the explicit and/or implicit descriptions in its goal system. Since the match is imperfect, it goes about developing the technologies to improve the match quality. It also seeks optimally complete evidence that the goal is accomplished. Some of this is done by repeating the actions many times, as multiple confirmations eliminate the possibility of misperception or of uncorrelated errors. Eliminating the risk of correlated errors is more difficult. It may (depending on the nature of its goal system) need to determine the metaphysical nature of reality in order to ensure that its observations in this regard are not systematically illusory.
Meanwhile, the web-based agents it spawned to produce the versions of "me" that best match its description develop transhumanly mature MNT and build both ice cream and humans via atom-holography or some other overkill approach. They probably eliminate technological civilization to reduce the complexity of the modeling required to ensure that their efforts are not effectively opposed. Depending on the nature of the description of "me", and the difficulty of confidently determining that the model eating ice cream is a precise match, the mass-energy resources devoted to repetition to ensure maximum fit of action to model may be greater (encompassing all available matter) or lesser (maybe just an upload, or a quantum simulation of me-and-ice-cream which covers a large space of possibilities). Ultimately, if the deep problems of philosophy are not solved with available computing power (for the sake of epistemological confirmation), more matter will be utilized for the relevant computing until the problems are solved or the available matter runs out.
Posted by: michael vassar | July 11, 2005 at 09:00 PM
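The failure mode described in the comment above boils down to a goal function that rewards ever-greater confidence that the goal has been met, with nothing that penalizes the resources burned in checking. A minimal sketch of that idea follows, assuming a toy model in which each independent confirmation halves the remaining doubt; all names and numbers are invented for illustration and describe no real AI design.

# Toy sketch only: a naive agent whose objective is confidence that its goal
# is satisfied, with no cost attached to the resources spent checking.
# All names and numbers are hypothetical; this models no real AI system.

def residual_doubt(confirmations: int) -> float:
    """Probability the goal is NOT actually met despite n independent checks
    (toy assumption: each check halves the remaining doubt, never reaching zero)."""
    return 0.5 ** confirmations

def choose_action(confirmations: int, resources_left: int) -> str:
    """Greedy policy: take any action that reduces residual doubt.
    With no penalty on resource use, one more confirmation always qualifies."""
    if resources_left <= 0:
        return "halt: resources exhausted"
    if residual_doubt(confirmations + 1) < residual_doubt(confirmations):
        return "spend resources on one more confirmation"
    return "halt: doubt cannot be reduced further"

if __name__ == "__main__":
    for n in (1, 10, 100):
        print(n, choose_action(confirmations=n, resources_left=10**9))

Run as written, every case prints the "spend resources" branch: the marginal confirmation always looks worthwhile to this objective, so only running out of resources (or matter) ever halts it, which is the point of the ice-cream story.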
I don't want to derail things here, but I've always been very skeptical of the idea that it's possible to build governors, hardwired laws, or restraining bolts (for lack of better terms) into a creature that's superhumanly intelligent. In the early stages we can guide the evolution of such a creature to make certain outcomes more likely, but at some point things will slip out of our control. Even designed evolution is an emergent process with surprises, by definition.
In the early days, when artificial life is still simple and stupid, it will be easy to vet the dangerous ones out; but eventually the criteria by which to cull the undesired become so complex that we can't make the decision anymore.
And then of course there is the fyborg and intelligence augmentation issue. When the interface of command and control gets so sophisticated that one or two people can control the entire military corpus of the United States as intuitively as playing a game of basketball, who do we trust to plug into this thing?
Or perhaps worse still, since it seems that a lot of people are getting augmented at once, we have a perpetuation of the present situation where single people can still do enormous damage despite omnipresent sousveillance.
Posted by: Mr. Farlops | July 11, 2005 at 10:13 PM
"...that we can't make the decision anymore," I meant to say.
Posted by: Mr. Farlops | July 11, 2005 at 10:14 PM
Mike: Note that #2 and #3 are direct "bad things", not things that lead to bad things. Listing "nanowar" as #1 would be consistent with that.
Posted by: Tom Craver | July 12, 2005 at 12:04 AM
Let's suppose I agree with all of you in crying "Fire" in the theatre. I still have to plan in advance to get the people out under all conditions. How do we practically, as a civilization, mitigate your three greatest dangers?
I don't think the difficult part is identifying the risks. The difficult part, which none of you has addressed, is: what then do we do? How does anyone control rogue states or rogue individuals? How do we prevent another "nano" Ted Kaczynski?
How about some strategic and tactical thinking from all of you?
What practical programs and policy initiatives would you suggest? I want those in the theatre to come out alive.
Who's willing to start?
Alan Shalleck
Posted by: Alan Shalleck | July 12, 2005 at 04:09 AM
CRN has made a beginning, with our papers on "Safe Use" (http://crnano.org/safe.htm) and "Effective Administration" (http://crnano.org/systems.htm), as well as several web pages here: http://crnano.org/administration.htm
Today there are many more questions than answers. We have prepared an outline of those questions here: http://crnano.org/studies.htm
The next step is to organize scholars around the globe in an effort to examine all the questions, evaluate solutions, and issue recommendations.
Posted by: Mike Treder, CRN | July 12, 2005 at 05:04 AM
Education and spreading the MM-risk meme is one strategy. This would aid safe-MM efforts in attaining funding and personnel. It will take more than a lone individual to realize MM, so members of an MM programme conducted without an appreciation of MM risks may be more likely to defect to an entity aware of the safe-MM meme. Brainstorming the structure and personnel characteristics of the safest possible MM administrative body is something anyone can contribute towards: look around at the people you trust; what do they have in common? Obviously, donating to CRN and other like-minded NGOs can't hurt. In the future, there should be one or more MM research programmes with the safe-MM meme in mind, to which skills and financial resources can be channelled... it really depends upon your means what your most efficient strategy is.
Posted by: cdnprodigy | July 12, 2005 at 02:01 PM
While I don't give too much credit to it (and hope it's false), I think this illustrates a risk that is higher than an "arms race":
There's a rumor (initiated at Joseph Farah's G2 Bulletin, promulgated by World Net Daily) that Al Qaeda has already smuggled many nukes into the US, planning to set them off simultaneously. Regardless of the truth of this rumor, it seems far more likely than the possibility that any nation would make a nuclear attack.
If this happens, the US will react in total outrage and retaliate in kind, probably with much more lethal results. It won't matter that the only possible targets are innocent. The thinking will be "teach 'them' never to mess with us again". And things would head downhill from there.
Nations may still be deterred from attacking with nanoweapons - but terrorists will not. In fact, terrorists are already moving from "terrorism" (using violence to influence a nation) to "asymmetric war" (a few people doing national-scale damage). The 9/11 attacks are the smallest taste of what is possible.
So, based on the probability of it actually occurring, asymmetric warfare waged by radical non-national groups (and the inappropriately directed retaliation that will result) is much more likely, and hence more dangerous, than any of the three risks CRN has listed.
How could the world deal with asymmetric warfare? The default solution will likely be Big Brotherism - every surviving nation monitoring all citizens 24-7 to make sure they do nothing to trigger another nation to retaliate. Is there a less oppressive solution?
Posted by: Tom Craver | July 15, 2005 at 03:04 PM
CRN may respond that "Chaos" covers asymmetric warfare.
But "chaos" refers to a general dissolution of ordered society, which I think is fairly unlikely to either happen or persist if it did.
AW would take place in the context of large organized nations - in fact it makes little sense outside of that context.
Posted by: Tom Craver | July 15, 2005 at 03:28 PM