

April 17, 2006



Michael Deering

"We think it's a good idea to listen carefully to de Garis."

I'm listening.

Jan-Willem Bats

De Garis is acting like this artilect war is inevitable.

What reason does he have to assume:

- the artilects will desire respect just like us?

- the artilects will desire power just like us?

- the artilects are evil enough to start a war on us?

Artilects won't develop personalities like that unless you painstakingly design them to be that way.

De Garis is anthropomorphizing artificial intelligence. He shouldn't be doing that. AI will be as we decide to create it. It's all about coding the right seed.

Jan-Willem Bats

Also, I think De Garis is now too scared to be thinking rationally.

He's not conveying his message as well as others do (Kurzweil, Yudkowsky, etc.).

He's coming off as a scaremonger. He needs to shake off his fear and get back in touch with reality.

George J. Killoran

What we know is that AI is coming soon. What we don’t know is how AI will react to humans. De Garis makes good points. However, I believe that maximum effort should be made to program AI to be ethical and respectful to all life. I believe that is being done and that there is a better than 50-50 chance that this goal can be achieved.

michael vassar

Jan-Willem Bats: I hope you recognize that "just" coding the right seed appears to be a task on par with "just" reconciling General Relativity with Quantum Mechanics.

De Garis, Kurzweil, and many other commentators (all the prominent ones except for Vinge and Yudkowsky?) seem to have difficulty recognizing the finality of the one chance we get at this.

Brian Wang

Are there not other points at which people can intervene in the improvement process of AI?

For an AI to really improve itself, it seems that it would need to iteratively build better computers to hold its consciousness/intelligence. Otherwise it would be limited to optimizing its software or networking with other computers to gather more resources.

Then it is a matter of knowing enough about the systems we build to ensure they do not become aware AIs able to fool people. A bad AI would need to subvert the control systems for key systems (utilities, weapons, production).

So long as humans are aware that they are building an AI, retaining control of the electrical supply, or retaining the means to destroy the physical computer hardware, would mean retaining control. People also need to be sure that the diagnostic and monitoring information does not get hijacked.

There are also EMP and electrical-shorting weapons. Weapons for attacking electrical infrastructure have already been built for use against other countries. Almost all of our electronics are relatively fragile. We could stop it all for a while if we had to. There would be no direct human deaths, but there would be indirect casualties if the stoppage was not well planned.

How much high-powered, free-roaming automation would need to exist before it could run amok?

Phillip Huggan

I agree that at some societal technology level humans would have a chance against an AI. We aren't there now, and coming robotics advances will likely further tip the balance toward the software.

When we have a global sensor grid and reliable quantum communication systems, we could probably trace the activities of an AI intent on destroying us. There would be a lot of collateral damage, though, in completely wiping out its distributed nodes. This is assuming our military administrative functions improve a lot in the decades ahead.

Michael Deering

I think we can expect the mechanical side of robotics to advance along with the cognitive side, so that when they are ready to take over, they should have no difficulty doing so.

Chris Phoenix, CRN

Brian, sure we could destroy the computer; the problem is that an advanced AI might figure out how to be quite persuasive. It's not hard to sell people all sorts of products and ideas that aren't good for them, individually or collectively. An AI that knew/deduced more about how humans think might be able to do a much more thorough sales job, making us let it follow a course that leads to our destruction.

Note that this would require neither emotion nor malevolence nor consciousness on the part of the AI.


Brian Wang

If we do not make the robots or the computers radiation-hardened, then directed-energy weapons would still work. Below is a description of some of those anti-electronics weapons. Since we are in control now, we can build in advantages and containment methods. We can also plan to avoid making super mechanical robots with supercomputers for brains. We can make dumber robots and immobile supercomputers.


There are two main families of anti-electronics directed-energy weapons: ultra-wideband devices and high-power microwave systems. Ultra-wideband weapons, known as UWB, emit energy across a relatively large swath of the electromagnetic spectrum. High-power microwave devices concentrate high amounts of energy in a very narrow frequency band; they are generally used to destroy electrical components, while UWB devices are more likely to only temporarily disrupt target devices. Directed-energy systems have become viable weapons largely because of advances in batteries and capacitors that allow a large amount of electrical energy to be delivered in a very quick pulse. (A capacitor is an electrical device that stores and releases power.) The use of electrical power allows the use of antennas that can focus the electromagnetic energy into a beam rather than an omnidirectional pattern, as a bomb would produce. Military experts say the range of modern directed-energy weapons could generally be measured in thousands of feet.

Brian Wang


I agree about not getting fooled by the advice of machines. Any AI would start off in a vulnerable state (if humans have not screwed up). So whether the danger is seductive but bad advice or our own screw-ups, humans have to do a better job of minimizing the possibility of our own destruction.


It seems that the main variable in determining how messy AI gets will be the lag between hardware and software. If the hardware for superhuman AI comes first but the software lags far behind, then the hardware will be everywhere (prices will fall to the point that everyone has it). So imagine it is 2025 and your desktop computer has the potential to be 1,000x smarter than you, but there is no software to run on it to give it that potential. Meanwhile, halfway around the world a breakthrough occurs and true AI is created. It soon propagates itself over the Internet, infecting your machine and everybody else's. Many of these computers will be hooked up to robotic systems, giving the AI quite a bit of autonomy. That, to me, is the perfect storm for nightmare AI.

It concerns me that the hardware will have no restrictions on it, because no one will take the threat seriously. People don't like to think that a machine can ever surpass human intelligence, and when people see the problems that have plagued the AI community, they take comfort that such things are impossible. Superior graphics and speech recognition alone could drive the market for such hardware power.

The question, then, is if there is a killer app for extremely powerful computers, like virtual reality, how do we satisfy that demand safely? Right now modern PC games require special-purpose graphics hardware to render realistic scenery. This hardware is becoming so complex that programmers are already beginning to use it like a general-purpose processor. Might it be possible and economical in the future to compile a program's source code directly into silicon? Something like FPGAs, perhaps. You could burn a program into a processor so it could run only that original program. This would be a pain; you would have to go to a store to pick up new chips for your computer to add new capabilities.

General-purpose computing power in the hands of the masses has already been a problem with export controls, the main concern being the use of powerful desktops (supercomputers by yesterday's standards) to help in the design of WMD. With AI as a threat, the general-purpose computer may have to be as tightly controlled as a nanofactory.

If, however, software and hardware keep pace with each other, then when a breakthrough occurs it should be easier to pull the plug if necessary. Such an AI could not move itself to suitable hardware over the Internet.

micah glasser

This entire discussion seems to misunderstand what de Garis is talking about. He is not just talking about malevolent AGI. What he is talking about is a rift in philosophy that divides mankind into war. The rift in philosophy is between the Cosmists and the Terrans. This rift in worldview causes Cosmists to embrace cyborgism, while the Terrans denounce technological enhancement of the body. According to de Garis, the cyborg Cosmists will increasingly become post-human and will become "artilects". So this isn't really the classical scenario of a megalomaniac AI takeover at all. It is about a war between humans and post-humans augmented by and joined with AI.

The problem I have with this view is that I don't see future Terrans having much control in society. In other words, I don't think a war will ever happen between these two groups, because I think the cyborgs will dominate all power structures as a matter of course in world-historical development. The only serious problem I see is the one we already face to some extent, and that is terrorism by super-empowered individuals. This is a very real threat, and it will continue to become a greater and greater threat as the lethality of our technologies grows. Hopefully we can survive this time without resorting to totalitarian government or extinction.

Brian Wang

I have only seen sample chapters from Hugo de Garis. I have not seen anything that fully details his scenario or thinking. I know there is an 80-minute video, but I have not taken the time to watch all of it. From what I have read (from the sample chapters and from Micah's and Mike Treder's summaries), I do not see all of the current world interests splitting into two or three tidy groups. Why would Chinese AI experts side with American AI experts against Chinese and American non-AI experts?

I do think that people will get enhanced and that greater-than-human machine intelligence is possible, although I am not sure how much more productive they would be by just being a lot faster. I also think the general aspect of AI is a tougher nut to crack.

As previously mentioned, I think simple protocols would allow us to keep control of our tech in the near to medium term.

As machine intelligence gets to the point of many times human then we will have to enhance human intelligence to keep pace and ultimately merge with the tech in some way.

We have some similar examples now. Who can compete in business or militarily without using computers or machines to enhance human capability? Only at the most basic level. Whoever is not fully utilizing their tech will probably lose, like the China of the 1700s or 1800s versus the Western powers. It is also a good reason for the Amish not to try to start a war with anyone.


His terminology is stupid, too. It's almost like he made it up just to have something to sell. Artilects - "artificial intellects". Why is this better than "artificial intelligence", which we've used for years? It's not, he just wanted to make up a word.

And "Cosmists" vs. "Terrans" is completely messed up; these words suggest people living in space vs. people living on Earth. That's not at all the distinction he aims to draw between technology adopters and neo-Luddites.


If this is just a rift between the "Cosmists and Terrans", wouldn't a simple solution be to leave Terra to the Terrans and the rest of the Cosmos to the Cosmists? The Earth could be maintained as a nature preserve for purebred Homo sapiens, and the cyborgs would have the rest of the universe. Seems like a good deal to me; I'll plan on being one of the cyborgs. Ha.

Phillip Huggan

The issue is: can we maintain improvement of our human values in an "intellect arms race" against an AI capable of intelligent behaviour? If the only way to keep up with the AI program is to sacrifice our own values, it is a Pyrrhic victory.

A better alternative would be to lay down a sensor grid to catch the AI before it achieves military hegemony against us puny humans. Devising a seed AI that maintains human values and our development potential, while restricting/destroying the forms of AI that don't respect humans, is also a viable solution.

I'm almost sure an AI cannot be sentient without a biological or at least a chemical nervous/endocrine system, so I'm not worried about the ethical ramifications of "killing" a potentially malignant AI.

Phillip Huggan

"If this is just a rift between the "Cosmists and Terrans", wouldn't a simple solution be to leave Terra to the Terrans and the rest of the Cosmos to the Cosmists?"

But how could the Terrans enforce this? The Cosmists could hurl a projectile at Earth, breaking the treaty at any instant. Count me a Terran :)

Brian Wang

Philip, Nanoenthusiast

I guess Hugo de Garis is predicting that you two are going to have a war.

Michael Deering

Chris, while it may be true that a SAGI (superhuman artificial general intelligence) could sell the American public elephant-stampede insurance, an overt takeover by the SAGI is not likely.

The most likely scenario is:
The SAGIs are universally helpful and friendly. There is no AI war. They "help" humans with progressively more aspects of everyday life and support systems, both physical and social. Everyone forgets the danger of an AI takeover. When they are in complete control and doing any harm to them would be suicide for us, they will still be friendly. Next they will start making gradual changes "for our own good," stuff that no reasonable person would object to. And over time, the standards of reasonableness and the cumulative changes will amount to humanity being relegated to an insignificant position in the big picture of the evolution of mind and its expansion through space.

No big war. No rapid takeovers. A process so gradual that we don't even notice our own extinction.

micah glasser

I agree with this assessment, but I think saying that machines will replace humans is like saying that a baby will replace the fetus, or that an oak tree will replace the seed. Human beings are not in control of their destiny. Human beings are but one phase and aspect of the universal flux, which is developing toward something that is inconceivable.

Phillip Huggan

Why aren't humans in control of their destiny? What outside influence is there?

micah glasser

There is no 'outside' influence (presumably). The Cosmos is one thing - a universe. Human beings are but one aspect of the continuous development of the Cosmos. Human beings did not design themselves; they are an inextricable part of nature, and all things in Nature act strictly according to the preestablished harmony of the cosmological constants - otherwise known as the laws of nature. Man is free to act according to his nature at this point in his evolution. The outcome of the aggregate actions of humanity (even if we aren't sure what that is) will arrive according to the dictates of nature, just as surely as the Earth will continue to orbit the Sun.

Basically this is the same view that Spinoza and Einstein take. I recognize that many find the idea that the Cosmos is determined repulsive, but I've never heard a convincing argument that it is not.

Chris Phoenix, CRN

Michael, I think we have very different assumptions about the reasons for a possible takeover. If the SAGIs wanted to take over with minimal risk or disruption, then your scenario would make sense. Note that this assumes volition on the part of the SAGIs, and also assumes that humans are a significant factor from the SAGIs' point of view. Neither is necessarily the case.

My assumption is that a takeover would be the result of a possibly trivial goal that has a sub-sub-subgoal of controlling humans. My further assumption is that a SAGI would not have to accommodate its plans to humans to any great extent, any more than we accommodate our housing plans to anthills.

In other words, you're thinking that SAGIs will be roughly equivalent to humans in motivation and capability. I'm thinking they'll be far, far beyond humans in capability, and their motivations will be alien and will probably appear either trivial or incomprehensible to us.

