
November 18, 2005

Molecular Manufacturing and Proliferation

Comments


benji farquhar

Maybe massive abundance will begin a global golden age. Maybe fear will act as a great leveller:

"There will one day spring from the brain of science a machine or force so fearful in its potentialities, so absolutely terrifying, that even man, the fighter, who will dare torture and death in order to inflict torture and death, will be appalled, and so abandon war forever."
Thomas A. Edison

It's somehow comical to me, the thought of every person in the world having a bright red switch that could end everything.

If the day comes, everybody in the world will have all the power in the world.

Maybe by that point we will know ourselves differently. Tire of violence. Recognize ourselves as: same.

Get some drugs made by nanotechies, the kind Huey Lewis was singing about, for the itchiest trigger fingers. Chill them right out. Everybody else will be busy deciding what kind of spaceship to tour/colonize the universe in.

Chris Phoenix, CRN

Benji, back when computers were still mainframes, students would try to make them crash--which would inconvenience everyone. I read about a system that included a command, runnable by any student, that would crash the system. It took all the fun out of it. If I remember right, the story said that the machine was only crashed a few times this way.

Only a few is fine for a computer...

Great Edison quote, BTW. I hadn't seen that one.

Chris

Tom Craver

Arms race possibilities:
1) Arms race in progress, all sides are fairly certain their potential opponents have counter-attack capabilities that they can't counter. MAD applies; war is unlikely before the arms race slows down.

2) Arms race in progress, one side incorrectly thinks that they have an attack form that prevents any effective counter-attack. Immediate attack is likely, lest the other side develop the attack and use it. This scenario is very unlikely, as the rapid pace of nanoweapon development will create great uncertainty regarding opponent capabilities. So it poses only a slight existential risk.

3) Arms race slows to an end, all sides have devastating counter-strike capabilities - MAD applies; maybe eventually the mutual distrust will wane. Moderate existential risk over time, from errors or accidents.

4) Arms race slows to an end, one or both sides realize they have an attack that can defeat the other side without retaliation (very unlikely). If one attacks, they win - not pleasant, but not an existential risk. If neither ever attacks, they can slowly come to trust each other.

5) One-sided arms race - only one nation has MNT, is reasonably certain no others do, and quickly develops weapons that could wipe out or defeat any opponents without significant counter-attack potential. This is somewhat likely. They can wait and watch their lead slip away (a bad idea, but it is what the US did after WW2 with nukes), or conquer the rest of the world, or find another option. I think there is at least one other option that is somewhat more hopeful. In any case, probably not an existential risk.
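
One way to read this list is as a rough expected-risk tally. Below is a toy Python sketch; every likelihood and risk weight is invented purely for illustration, since the scenarios above give only qualitative judgments:

```python
# Toy expected-existential-risk tally over the five scenarios above.
# All probabilities and risk weights are illustrative assumptions,
# not values from the original comment.
scenarios = {
    "1 MAD during arms race":      (0.30, 0.05),  # (likelihood, risk if it occurs)
    "2 false first-strike belief": (0.05, 0.60),
    "3 MAD after arms race":       (0.30, 0.20),
    "4 real first-strike ability": (0.05, 0.10),
    "5 one-sided arms race":       (0.30, 0.05),
}

for name, (p, risk) in scenarios.items():
    print(f"  {name}: contributes {p * risk:.3f}")

total = sum(p * risk for p, risk in scenarios.values())
print(f"illustrative expected existential risk: {total:.3f}")
```

Changing any single weight shows how sensitive the overall judgment is to these guesses.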

Chris Phoenix, CRN

Tom, in point 1 you're assuming that they'll think like nations, where massive destruction is unacceptable. In asymmetric war, the question isn't whether they can counter the attack, but whether they can survive it. Now, what happens if you have two asymmetric-war mentalities fighting each other?

You're also assuming that attacks would escalate. US/USSR fought proxy wars for decades without going nuclear. In doing so, I think they got a pretty good idea of each other's conventional capabilities. Suppose that I was continually learning whether you could counter my attacks, by letting rogue elements play with my toys and not policing them very carefully near our border? You'd either have to counter each attack, or be nibbled to death. If you failed to counter, my rogues would get bolder. Oops, so sorry. You wouldn't MAD me over a small border incident by rebels that I don't completely control, would you? Because you know I might MAD you right back... and the fact that the incident was a problem means that my attacks are pretty good. Eventually, you'd be nibbled to death, or I'd gain enough confidence to attack outright. Note that the vast design space of MM-built weaponry means that these attacks might not be very flashy. Nothing to get the UN excited over; nothing you can talk about in detail without revealing more than you want to about your defenses.

2) You're assuming that all actors are sane. Bad assumption when there are a couple dozen states involved.

3) I think the risk of accidents is more than modest. Interpenetrating multi-modal weapon networks on hair triggers... brrr! (I haven't seen anyone argue against Gubrud's paper yet.) We almost had fatal mistakes several times with our one-dimensional nuclear warning system!

4) and 5) If one actor conquers the world, the chance of massive unbreakable oppression appears pretty high. Bostrom considers that an existential risk, and I'm inclined to agree.

Chris

Rik

The point is not whether the US *should* limit technology transfer, but whether it *can*. I will interpret your fears as being about a desktop nanofactory, continuously pumping out Drexlerian assemblers. I think if you want to prevent the spread of that, you'd have to limit the transfer of every nut & bolt of which it is made. That seems not only untenable, but also impossible.

So, these are still all strategies to avoid the massive disruption. What if, like climate change, there's no such thing as prevention, but only adaptation?
I think it is increasingly likely that we won't be able to avoid disruption. Not because of strong AI or desktop factories, but because of robotics.
When robots enter the workforce, an enormous number of people could be left jobless. Since everything you do or can do (economically) is linked to having a job, I expect the jobless to scream for desktop factories once they know these exist. Would that lead to a Star Trek world? One can only hope so, because in my view all the alternatives suck...

Chris Phoenix, CRN

Rik, Drexlerian assemblers are not as scary as a nanofactory in any sense. A nanofactory pumping out automated weapons would be very scary. A nanofactory pumping out tiny smugglable nanofactories would be scary to a regime that wanted to control them.

You raise a point that I've often expressed as: Will we be retired or unemployed? Yes, if the jobless are allowed desktop nanofactories (and blueprints) then they can retire. Of course, even retirement is a bad shock for many people. Humans need a little competition. As long as it doesn't get out of hand...

Chris

Phillip Huggan

Tom, your scenario 4 seems most likely to me; why do you think it is unlikely? A workable MMed missile defence and diamondoid missiles are all that would be required. Even the former could be replaced by bunkers if one just wants to protect a certain subsection of the population.

Tom Craver

Chris:
Asymmetric war is only possible when one side denies itself use of its full military capabilities. That only happens when the weaker side isn't an existential threat to the stronger. Two opponents with that "mentality" would go about 10 minutes before both realize that asymmetric tactics won't work against each other.

Fighting by proxy is a way to *avoid* existential risks in a war where the opponents have weapons that pose such risks.

On #2 - No, I'm not assuming all actors are sane - I am assuming that paradoxically insane yet competent (i.e. able to take part in an arms race) actors are uncommon. Keep in mind that some nations are "crazy like a fox" - i.e. they use aggressive posturing to extort benefits from much more powerful nations.

3) Most existential risk would be due to errors occurring during a rare crisis period (like the Cuban Missile Crisis). Outside those periods, those in control of the weapons and aware of their limitations are likely to give the other side the benefit of the doubt - as happened several times in the cold war on both sides.

4,5) While I don't like the idea of one nation conquering all others, I don't see success in that as an existential risk to humanity.

Phillip Huggan

It isn't one nation that conquers another. It is a very small group (less than a dozen?) who have access to the MM factory infrastructure of that nation's secret weapons programme. If these people are idiots, get used to adapting to their personal views on the way individuals should live.

Equal tech levels don't imply MAD if offensive weapons (missiles, WMD) are easier to prototype than defenses against them (powerful particle-beam missile defense perimeters, or underground cities surrounded by a thick perimeter of subterranean "mines"?).

Tom Craver

Phillip: Scenario #4 implicitly assumes near parity between the nations - else it'd be more like scenario #5. If both sides have missile defense and massive numbers of missiles, either the defense is dominant (so neither can significantly harm the other) or the missiles are dominant (in which case a counter-attack is a real threat).

In order for a first strike to really be the safest path, it'd have to take full effect prior to the enemy knowing it was under attack. While there are stealthy or ultra-fast attacks, both nations will have a high incentive to set up detection and retaliation capabilities that are unlikely to be taken out.

Miron Cuperman

Just wanted to point out that the Mutual Assured Destruction doctrine does not apply when you cannot identify the enemy. With MM weapons, this is likely to be the case. Weapons will be too easy to smuggle, untraceable, or mobile over long distances without detection.

If you have 2+ potential enemies, and you are hit by an unknown force, you can't retaliate effectively. You could lash out against *everyone*, but such a doctrine seems very unstable... you may be hit by someone hiding in a bunker at an unknown location, and you'll be retaliating against everyone but the attacker.
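
Miron's attribution problem can be put in rough probability terms. A minimal Python sketch, with the suspect counts and the uniform-guess assumption purely hypothetical:

```python
import random

# Toy model of retaliation without attribution: with N equally plausible
# suspects and no evidence, a blind counter-strike hits the real attacker
# with probability 1/N. All parameters are illustrative assumptions.
def misattribution_rate(n_suspects: int, trials: int = 100_000) -> float:
    """Fraction of blind retaliations that miss the actual attacker."""
    misses = 0
    for _ in range(trials):
        attacker = random.randrange(n_suspects)
        target = random.randrange(n_suspects)  # uniform guess
        misses += attacker != target
    return misses / trials

for n in (2, 5, 24):
    print(f"{n} suspects: ~{misattribution_rate(n):.0%} of strikes miss")
# 2 suspects: ~50%; 24 suspects (the "couple dozen states" above): ~96%
```

With a couple dozen plausible actors, blind retaliation almost always punishes the wrong party, which is Miron's instability point.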

Phillip Huggan

What if the offensive missile systems (by missiles I really mean other technologies, but let's just cap it at missiles in this forum) are so superior that they annihilate the defensive countermeasures before they have time to function? In the context of Nanhattans, I would think spies will disseminate key research findings to other national programs, or at least trigger the start of rival MM research programmes in their own national affiliations. Tyranny and arms-race risks would seem to be maximized in the context of trying to "win" this type of race.

Phillip Huggan

The problem is that whoever initially develops MM must use it to conquer the world. And to effect this in a way that doesn't lead to tyranny is very hard. Leaders with strong vested interests in the existing geopolitical and corporate games are least likely to see the need not to exacerbate our soon-to-be-obsolete 400-year-old capitalism models and 600-year-old nation-state worldviews, and least likely to use MM products to force some sort of human rights charter. They will be most likely to win any brute-force race to MM based on money, personnel numbers, "spy aptitude", and sabotage of rival programs or other forms of pre-emptive attacks. I think the only advantage responsible actors have is time, and that's why we should play all our cards now, while there are no Nanhattans or big corporate programs and we can still have a pure and transparent exchange of ideas, and while our meme multipliers to spread to potential MM personnel have the longest time period in which to compound.

Mike Treder, CRN

Phillip, you suggest that we "should play all our cards now." What cards do we have?

In the early days of CRN, we reasoned that the only safe and responsible approach would be a single cooperative international MM development program, and a democratic, carefully balanced, global MM administrative structure.

We were told repeatedly that this proposal was either unworkable or unwise. I am still not convinced of that, but I do see that foolproof safeguards against dictatorial domination are essential, albeit very hard to maintain in a post-MM world.

Could universal transparency be a good "card" to play? Yes, I know it's another scheme that is widely considered impossible to implement, but if it were achieved, could it prevent global catastrophic disaster, i.e. war, chaos, or massive oppression?

Chris Phoenix, CRN

Tom, I may have used "asymmetric war" incorrectly.

We're used to thinking of warmaking ability as being proportional to economic resources, and economic resources as something you can blow up. That's "the last war." That kind of war can be won by attrition.

In an MM war, I think we'll see the situation where the side that wins is the side that's more willing to accept damage to its civilians and infrastructure. Even if the damage is very asymmetric, I win if I'm a dictator who's willing to see all my civilians killed and you're a democracy that can't afford to have more than 5% of your civilians killed.

It looks like MAD, but I don't think it's really MAD. It's more like eye-for-eye till both are devastated, except that--unlike in the past--losing most of your resources will not diminish your offensive capability. Is MAD in slow motion really MAD? It seems to me that it's not. And that seems dangerous.

Chris

P.S. I'm about to go on vacation so may not be responding much for the next week.
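
Chris's damage-tolerance argument can be made concrete with a toy attrition model. A sketch under stated assumptions: symmetric per-round damage, offensive capability that never degrades (the MM premise above), and purely illustrative tolerance numbers:

```python
# Toy model of "MAD in slow motion": each round both sides take equal
# damage, offense never degrades, and a side quits once cumulative losses
# exceed its political tolerance. All numbers are illustrative.
def slow_mad(damage_per_round: float, tol_a: float, tol_b: float) -> str:
    loss_a = loss_b = 0.0
    while True:
        loss_a += damage_per_round
        loss_b += damage_per_round
        if loss_a > tol_a and loss_b > tol_b:
            return "both devastated"
        if loss_a > tol_a:
            return "A quits; B wins"
        if loss_b > tol_b:
            return "B quits; A wins"

# A dictator tolerating 100% losses vs. a democracy tolerating 5%:
print(slow_mad(damage_per_round=0.01, tol_a=1.00, tol_b=0.05))
# -> "B quits; A wins", even though the damage is perfectly symmetric
```

The willingness threshold, not the damage ratio, decides the outcome, which is why this looks like MAD but doesn't deter like MAD.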

Mike Deering

The world is on the verge of catastrophic destruction. Good time for a vacation.

Jeff Herrlich


There's another factor to consider which I'm not sure is receiving enough attention: strong Artificial Intelligence. I'm not talking about "smart" self-guiding weapons or anything of that sort, but rather genuine, human-surpassing, intelligent machines. Real AI seems to be scientifically possible. The biggest obstacle to past attempts at creating AI has been hardware insufficiency. It has recently been estimated that an emulation of the human brain requires between around 10^14 and 10^17 computations per second (cps). The lower estimate has only very recently been matched by current supercomputers.

The point is that MM will create computers many times more powerful than what will be necessary to create strong AI, from a hardware standpoint anyway. Then it's only a matter of developing the proper software (even this could be sped up by using MM to enhance normal human cognition). A recursively self-improving AI would then very quickly become a superintelligence. A true superintelligence would be literally millions of times more intelligent than any normal human. With such an intelligent being, it seems that even a substantial technological gap between competing MM powers could potentially be overcome.

I'm sorry to say that this scenario does not bode well for the deliberate creation of a benevolent, friendly AI, but rather for an AI that (for whatever reason) is willing to take human lives. Such an AI would ultimately be a threat to all of humanity regardless of which side created it. Perhaps this will encourage the timely creation of a friendly superintelligence, before the MM arms race gets out of control, one that can assist humanity with policy and negotiations. For more info on friendly AI, check out the Singularity Institute for Artificial Intelligence (SIAI) at their website.

Jeff
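
Jeff's hardware numbers can be sanity-checked with quick arithmetic. A minimal sketch; the supercomputer figure (roughly Blue Gene/L's late-2005 Linpack result) and the 10x-per-4-years growth rate are assumptions added here, not from his comment:

```python
import math

# Back-of-envelope check of the brain-emulation hardware gap.
# Assumed figures (not from the original comment):
#   - Blue Gene/L, late 2005: ~2.8e14 flops, used as a proxy for cps.
#   - Top-supercomputer performance grows ~10x every 4 years.
BRAIN_CPS_LOW = 1e14          # low-end emulation estimate, cps
BRAIN_CPS_HIGH = 1e17         # high-end emulation estimate, cps
SUPERCOMPUTER_2005 = 2.8e14   # approximate flops

print("low estimate already matched:", SUPERCOMPUTER_2005 >= BRAIN_CPS_LOW)

gap = BRAIN_CPS_HIGH / SUPERCOMPUTER_2005
print(f"shortfall vs. high estimate: ~{gap:.0f}x")            # ~357x

years = 4 * math.log10(gap)   # years to close the gap at 10x/4yr
print(f"years to high estimate at that rate: ~{years:.0f}")   # ~10
```

So even on the conservative 10^17 cps figure, the assumed growth trend closes the gap in roughly a decade, before any MM speedup.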

Tom Craver

Only marginally related, but this article illustrates why a nation can't entrust national security to any other nation, even if they are "friendly":

http://www.timesonline.co.uk/article/0,,2092-1879713,00.html

While the nominal topic here is that Britain threatened to use nukes against Argentina, the real interesting point to me was the statement that the missiles the French sold the Argentinians had "secret codes that render deaf and blind".

Also: "If our customers find out that the French wreck the weapons they sell, it’s not going to reflect well on our exports."

Tom Craver

We should break the discussion of proliferation into several "ages" to avoid confusion.

Consider the following periods:
- Early-arriving MM (next 5 years),
- "on-time MM" (around 2010-2025),
- late-arriving MM (2025-2050),
- post-MM - everything beyond a few years after any initial MM onset period, on the assumption that changes due to MM will be massive compared to any changes from now until MM is developed.

Strong AI will likely be limited to late-MM or "post-MM". While it's fun to speculate about those periods, unfortunately they are likely to be so different that we can't usefully anticipate their problems, let alone identify correct solutions. At best we might hope to plan for the Early and On-Time MM transitions.

Early-MM is unlikely, but the easiest to analyze - we can look at what's available now and in research labs, and project the immediate effects of MM proliferation. And maybe some of what we speculate on there will still apply to the On-Time period.

The comments to this entry are closed.