

January 29, 2004



Joe Peden

"Did the Bush Administration create a new American empire—or weaken the old one?"

I guess it might be hard to figure out where this article was going?

The Iraq war is justified by the Bush Doctrine, and the fact that Iraq is a near perfect example of where it can and should be employed. The action comes under the rubric of self-defense quite nicely, without appealing to "empire". So far the object of the Doctrine as applied to Iraq, that is, the prevention of more 9/11-like attacks, appears to have been a success -- though this might be coincidental.

I read only about 1/2 of this article, as it had already taken so many licenses with reality and judgement that I considered it a waste of time to go on. I got tired of groaning.

I don't know the answer to this one, but: has the U.N. yet defined what "serious consequences" in Res. 1441 means if not military action?

So far as I can see the term "empire" is simply here defined as "whatever the U.S. does", for whatever reason. Thus, I am forced to agree that the U.S. seeks empire, but we do not know if it is a new one, or just the old one weakened. Does anyone know if the article answered this question?

Mike Deering

Article Title: There's Only Room On This Planet For One Of Us

Author: Mike Deering.

Thesis Statement: Our current political situation of 160 independent sovereign nations on Earth is going to become non-viable after the development of molecular manufacturing technology.

I am sensitive to the political and cultural realities that make the idea of the US taking over the world a non-starter. I understand that the American public has no lust for world conquest. To those who argue, "It just ain't gonna happen," I can only ask, "What if that were the one remaining alternative to the end of humanity?" Then they look at me and go, "Huh?"

When I say "non-viable" I'm not talking in the context of public opinion or US history; I mean non-survivable for the species. After the maturity of nanotechnology, enemies cannot be tolerated. Political unknowns cannot be tolerated. The technology is too powerful. The potential weapons are too powerful. There is no possibility of treaty verification because the technology is so small. In a planetary geopolitical environment with multiple independent sovereign nations, any one of them could, in complete secrecy, develop a novel weapon system capable of destroying all the other nations in a first strike, completely and absolutely, and without the possibility of retribution. The possibilities for the design of such a weapon, while not infinite, are so great that it is not feasible to predict and prepare for them all. Therefore there is no effective defense, at least until the technology has been around long enough for almost all of its capabilities and uses to have been explored and become known, which could take decades even at post-Singularity rates of development.

Whether the U.N. was a good idea in the past or a big mistake; whether you have favored a foreign policy of isolationism or engagement; whether a one-world government is politically, socially, or culturally possible: all of this is immaterial to the question of species survival. Things are not going to be like they always were. There is a new set of contingencies coming. New thinking in our most basic areas of life is required because the old arguments will no longer apply.

Nanotechnology is not going to be just another technology to add to the mix. It will revolutionize every existing technology and create many new ones, including Artificial General Intelligence (AGI), re-engineering of living organisms, and colonization of the rest of this solar system and other nearby systems. The changes nanotechnology will bring to the human condition are almost beyond superlatives to describe: physical immortality, wealth beyond imagining, personal freedom, self-sufficiency and independence from all other humans or institutions, and complete interchangeability between data and matter, data and life, data and people.

The only way that humanity is going to survive is for there to be one authority on this planet, and for that authority to have the right to monitor all activity. If you can't live with that situation, you need to get off the planet.

Joe Peden

Hey Mike, how do I get off the planet? You sound rather crazed. But, if what you say is true about the future as defined by nanotechnology, then there is going to be one hell of a fight prior to the eventuation of this future. I therefore recommend that the U.S. embark upon a Total Empire Strategy now, and take over the World. This would also be convenient for the current critics of the Bush Doctrine, who claim it is merely a subterfuge to mask our overwhelming desire for Empire. It would prove them right, and then they might be happy. What say you?

Mike Deering

Hi Joe, Getting off the planet before the end of the year might be a challenge. But soon nanotech will give us the materials, propulsion systems, and energy to make space travel easy, even comfortable. You might want to try and get off in the first wave. If I sound crazed it's only because I am aware of a different reality than most people: the reality that the Singularity is coming. As for the Bush administration taking over the world, I don't think they yet have the capability to cleanly accomplish the goal. And then there would be political problems if he tried to stay within the current constitutional government structure.

Chris Phoenix, CRN

Mike, it's true that verification of untrustworthy people's activities will be a very hard problem. But there are several possible alternatives.

1) Make people trustworthy. There will be lots of ways to do this: lie detectors, drugs... some are even relatively non-intrusive.

2) Limit the possible activities. For example, with built-in technical limitations. (Yes, this can't last very long and creates other problems. But it's a possibility that should be considered.)

3) Surveillance is more reliable than verification. With round-the-clock surveillance on every human, you can detect attempts to break the surveillance. Who watches all that video? Computers can discard 99% or maybe even 99.9% of it as being ordinary. (They're already working on this at MIT, and have already been somewhat successful.) Then humans can watch the rest.

I'm not saying I like any of these ideas. But I don't think we can say at this point that global meltdown is inevitable; there are lots of powerful options we haven't considered yet.


Janessa Ravenwood

Well, I should rather hope you don't like those ideas, as you are endorsing outright fascism if you do.

Mike Deering

In 1932 Mussolini wrote (with the help of Giovanni Gentile) an entry for the Italian Encyclopedia on the definition of fascism.

This is what he wrote:

"The foundation of Fascism is the conception of the State, its character, its duty, and its aim. Fascism conceives of the State as an absolute, in comparison with which all individuals or groups are relative, only to be conceived of in their relation to the State. The conception of the Fascist State is that of a directing force, guiding the play and development, both material and spiritual, of a collective body. The Fascist State is itself conscious and has itself a will and a personality. The Fascist State organizes the nation, but leaves a sufficient margin of liberty to the individual; the latter is deprived of all useless and possibly harmful freedom, but retains what is essential; the deciding power in this question cannot be the individual, but the State alone."

I think what Mussolini was describing here is a super-organism: the state conceived as the mind of the organism, the nation as the body, and the individual citizen as a single cell in a multicellular organism. In a competitive community of nations, elevating consciousness and intelligence to the state level would confer a significant survival advantage. He was particularly disdainful of the intelligence of democracies. If we were to analyze the behavior of the U.S. government (not the people, not the scientific advancements, not the accomplishments of free enterprise business, but solely the behaviors of the government), would we rate it at the intelligence level of the sea sponge, the sea turtle, or the dolphin?

With the development of real A.I., Mussolini's vision of a conscious state is finally realizable. And the development of nanotechnology will give this super-entity the capability to control all aspects of its behavior down to the individual citizen level, as he says, depriving the individual of all useless and possibly harmful freedoms. After the Singularity, the possibilities for the course of humanity become infinite, and a real Fascist State is certainly one of them. But I don't think it is one that any of us would choose, and I am sure it is not what Chris or I are advocating. What we are advocating is something called the transparent society.

The transparent society has been written about with great clarity by David Brin, PhD. He sees the solution to all problems of human misbehavior as total surveillance of all people and all activities all of the time, accessible by anyone at any time for any reason. In this environment of no privacy, all crime and deceit would disappear. No one could get away with anything and everyone would be as safe as possible.

If you are interested in learning more about the Singularity, click on my name at the end of this message.

Chris Phoenix, CRN

Janessa, there are lots of abusive and oppressive styles of government other than fascism. The three policy options I listed are very different from each other. If they all lead to fascism--so directly that to promote any of them is to endorse fascism--then perhaps with sufficiently high technology, the only alternative to fascism is anarchy. If you want to argue that, I'll listen.

But if all these policies don't lead directly and inevitably to fascism (as opposed to feudalism, monarchy, empire, oligarchy, theocracy, communism, kleptocracy...), then perhaps your comment was more knee-jerk than constructive. This discussion is difficult enough without unwarranted use of hot-button words. Especially when they are used as unwarranted accusations about other people's beliefs.

Just to be clear: One of my goals is to *avoid* any kind of government or administration that would require central planning of everything and would stifle human diversity and freedom. It's not clear how to avoid that--the technologies will be there for the taking.

Another of my goals is to avoid destructive people getting their hands on extremely powerful technologies.

The trouble is that there are many sources of power and many kinds of destruction and many motivations to be destructive. Keeping those apart in all important cases is not going to be easy. Have you studied Pol Pot? How would you like to give Pol Pot or Jim Jones or the Aum Shinrikyo guy the power to destroy the human race? It's hard to rank extreme evils. But given a choice between Hitler, Stalin, and Pol Pot as world leaders, the one I'd be most scared of is Pol Pot.

So not only is it unfair (and rather annoying) for you to imply that I'm promoting fascism, it also shows a failure of imagination on your part: fascism, though terrible, is not the worst possibility...

Joe Peden

AI devotees, now the fun part: what is intelligence?

Joe Peden

"Make people trustworthy. There will be lots of ways to do this: lie detectors, drugs... some are even relatively non-intrusive."

Chris, you are already into a big problem -- the same one characterizing all thought controlling systems demanding "correct" thought: no one knows what "trustworthy" is, if one's trustworthiness is evaluated by his/her thoughts. Any thought can be seen as a threat, as evidence of untrustworthiness. One can even doubt a "correct" thought. Again, I mean any thought.

But, then, there are always new thoughts, even and especially arising within those who think they know what correct thought is. It is not possible to evaluate these thoughts. Thoughts are not things and cannot be compared to a model. The function of thought is not to think correctly. "Correctly" is a word-thought itself. Again, it is not a thing.

Lie detectors do not detect lies. They record some simple physiologic changes which are interpreted by a human, often incorrectly. Who gives the lie detector to the interpreter?

There is no such thing as a truth serum. One would have to have a standard of truth independent of a drug which would then be evaluated as to its efficacy regarding eliciting the truth. What is this independent standard? There is none. Truth does not exist as a thing. There are no ultimate truths. Truth is a creation of thought processes. There are always new truths. No one agrees what the truth is. Watch this discussion.

The "untrustworthy" will have to be killed by the "trustworthy". Then the trustworthy will have to turn upon themselves, because of new thought, different thought, threatening thought. Finally the last remaining entity will have to commit suicide, because of its own new thought, which might not be trustworthy.

This is a sadomasochism which inheres in any idea of true or correct thought which sees thought as a thing which can be compared to a model. This model is also known as dogma. It is dead thought. The great state organism dies. The ants survive.

Brett Bellmore

I don't believe it's possible to keep bad people from getting their hands on nanotechnology, any more than it's been possible to keep them from getting their hands on guns. The effort to do so could, however, put bad people in charge of the technology. Resort to nasty measures in the attempt, and the effort will automatically get staffed with people who are comfortable with resorting to nasty measures.

On the other hand, let's not overstate the problem. TODAY we've got tech which would allow evil people to be vastly destructive. And yet you don't see terrorists using fuel-air bombs, for instance. Even that Japanese cult really did a pathetic job of attempting biochemical attacks. Evil people just don't seem to be very, well, clever. I'm not saying that it's logically impossible for somebody to be very evil, and very smart, but it's seldom seen. Evil seems to be a pathology which goes along with stupidity. Look at the average IQ of criminals, for instance.

The real threat is at the state level, because that puts the clever people at the service of the evil people, generally denying them any choice in the matter. And so, the real solution is to get rid of evil states. Even now we're starting to see that the existence of nuclear weapons makes that necessary.

The history of the 20th century tells us something: Free nations don't go to war with each other. They've got better things to do. With nanotech to relieve resource shortages, they'll have even less reason.

The solution, in short, is liberty. Not control.

Brett Bellmore

So, what do I suggest?

1. Forget this vast, international, all-encompassing regulatory scheme of yours. Not only isn't it going to happen, it's so obviously not going to happen that you identify yourselves as impractical dreamers if you seriously propose it. It's a credibility killer with anyone who matters.

2. Our government, and some others, have already been convinced that in an age of nuclear weapons, the very existence of "rogue regimes" is a threat to everyone. Emphasize to them that when nanotechnology arrives, that threat becomes a full-blown nightmare. Getting rid of the world's rogue states is a more important matter than they currently realize! It has to be done either before, or shortly after, the arrival of true MNT. Fortunately, unless we deliberately handicap ourselves, we WILL get there before they do, and a window of opportunity will open for us to take them out before they can deter us. When that window opens, our leaders have to ALREADY be convinced it's necessary; there won't be time to bring them around then.

But. The need to take out North Korea doesn't equal the need to take out France. We need to get rid of the bad apples, not the whole barrel.

3. Dispersing mankind throughout space, in order to rule out the worst case scenario of universal extinction, is priority one. FIRST rule out extinction, THEN make things work out nice. It helps out that a properly run dispersal effort will quite naturally suck up the very people who would be most inclined towards bootstrapping and other underground activities, and get them off Earth, and busy elsewhere. You don't HAVE to take over society in order to live your dreams, if you can go off someplace else and establish a society of your own.

Which is why you don't send out one colony, or two. Within very broad limits, you send out anyone who wants to go. You don't force the people you won't let go to try to build their own way into space around you.

4. A certain amount of smart control over the nanofactories is indeed a good idea, you don't want 4 year olds punching up hand grenades, for instance. But always keep in mind that restrictions create black markets, and no control system is going to be perfect. You shouldn't TRY for complete control, just amelioration.

5. Society IS going to be terribly disrupted, and there's no avoiding it. People don't have to suffer greatly due to that disruption, if nanofactories can be used to make them essentially self-sufficient in food and shelter.

Joe Peden

Nice posts, Brett. My only problem is with the "leaving the planet" option/recommendation. There is nowhere to go where anyone could have any reasonable hope of "living their dreams", unless they were extremely desperate in the sense of fearing that this could not be done on Earth without a greater chance of not living.

Now, though, if you can get me into the Starship Enterprise, the option looks very good. But I wouldn't leave right away.

Brett Bellmore

With nanotechnology, there's scarcely anywhere in the solar system that isn't capable of being colonized, and quite comfortably. True, you'd spend all your time in either artificial habitats or wearing protective gear, but that's no obstacle to the good life. Artificial habitats can be made darned comfortable, and virtually indistinguishable from a natural environment.

Chris Phoenix, CRN

Brett, I agree with you that evil states will be a serious problem in a nanotech world. I'm not so sure that this is equivalent to saying more personal liberty is automatically safer. I suspect there are diminishing returns at all extremes. For example, regulation of cars and driving makes our roads safer than if we had no regulation. But too much regulation tempts noncompliance, and people start driving without insurance or training or in unsafe cars.

Joe, I don't think we should, or do, expect *complete* and *eternal* control. I'd be happy if we could make it more difficult to break the tech restrictions than to bootstrap a new nanofactory. And that difficulty will rapidly decrease.

But I do think that international cooperation and administration can help. Just as it helps policing and pollution control today. I think it'll be clear to most nations that they'll be more secure if they cooperate with each other. If not, then the idea's a non-starter. But I don't think merely proposing it should damage our credibility.

Perhaps we need to work on our communication skills. Whenever we propose anything, it seems like people hear it as suggesting the extreme. That's usually not what we mean. It'll take a combination of a lot of strategies to smooth the Step enough to avoid random chaos.

