Excellent article by Lawrence Lessig in the April 2004 issue of Wired magazine, called "Insanely Destructive Devices: trying to defend against self-replicating weapons of mass destruction". Lessig makes the logical point that "if we can't defend against an attack, perhaps the rational response is to reduce the incentives to attack."
That's along the lines of one of CRN's key proposals -- reduce the need for competing programs of molecular manufacturing development by making access to the basic technology available to all.
As we wrote in our paper, "Three Systems of Action":
Once nanofactories can be built, people will demand access to them. If legitimate access is not provided, some of the "have-nots" will obtain black market devices of comparable functionality. Such devices would presumably be uncontrolled, thwarting any attempt to regulate, tax, or charge royalties on products they produce. Since a small nanofactory can make a bigger one, and a large one can make thousands of duplicates, smuggling would be impossible to prevent.

To minimize the black market, it is in the interests of both Guardian and Commercial organizations to supply nanofactories, as capable and flexible as possible, to the entire global population. This flexibility must include the ability to build certain products with minimal royalties or taxes—preferably zero added cost, because anything else would only encourage illicit factories.
Of course, the factories cannot be completely unrestricted. Certain weapons, substances, and dangerous nanobots should be prohibited or restricted, and all commercial intellectual property should be controlled according to the wishes of the owner. However, aside from these limitations, Information system workers should be given free rein to design and give away any product. This will greatly reduce the pressure for illicit factories.
"Certain weapons, substances, and dangerous nanobots should be prohibited or restricted..."
And right there you just guaranteed your black market. Illegal drugs are at least a $60 billion market in the US alone, after all.
Posted by: Brett Bellmore | April 04, 2004 at 05:27 PM
Fantastic article by Lessig, and very much in line with the typical public response: first try to ban it or control it, and finally disseminate it.
The position that many of us have arrived at is the desire to have the "thing" (a nanofactory, etc.) without the trust. This is and remains a power and control issue for those who resist, relying on and supported by a belief and mindset based on 1800s economics. The bottom line being that "we" (or maybe "they") don't want to lose the social controls of supply and demand, or don't trust that anyone else will use such a tool responsibly.
Brett's comment supports this and highlights the desire of one (any) individual to control another. Drugs are a problem (and earn the USA huge $$) only because one set of individuals doesn't want other individuals to have access to something (however self-destructive that desire might be).
Perhaps a working nanofactory would also result in a step away from the patriarchal approach to power and control most cultures seem to cling to (oops, a bit off topic... or is it?)
Posted by: Nick Robilliard | April 04, 2004 at 08:15 PM
"... all commercial intellectual property should be controlled according to the wishes of the owner."
Let me diverge here for a moment and talk about economics. Our whole economic system is adapted to operate in an environment of scarce resources. Where resources are for practical purposes unlimited, the economy disappears. No one but dive shops charges for air. I am no longer charged by the minute for internet access or long distance phone usage. Restaurants don't charge for tap water. The exception to this is IP: software, books, movies, some internet content. Why is this? Simple: creative people want to spend all of their time doing their creative thing. That means they must make it pay the bills. Another reason is that some creative activities require large expenses, which are covered by businesses which end up owning the IP. Most, not all, highly creative people would prefer to give away their creations for the widest possible utilization and appreciation, but are trapped by our current economic environment of scarcity. When scarcity changes to abundance and people are freed from their economic bonds, the economy will disappear or be greatly reduced, and then attitudes about IP will change. Public domain IP will dwarf "for sale" IP in quantity and quality. Another factor that will drive down the cost of IP that no one on this blog is talking about is the advent of A.I.
What can we expect when the economy changes? Almost all jobs and businesses will disappear. Crime will be greatly reduced. Cheap products and robotic labor will destroy the economy. Unemployment will reach approximately 100%. Default and foreclosure of mortgaged property will be almost universal. All credit will be terminated. State unemployment coffers will be depleted in the first month. Public assistance programs/welfare will be terminated due to over utilization. Property tax foreclosures will be endemic. Retirement funds will collapse, even government ones. Governments will become insolvent. Paper money will be worthless. Precious metals will drop in value due to increased capability of molecular mining. The only thing that will still have value is land, and raw or developed land will be worth the same. Location won't matter either: land in the middle of Manhattan and land in the middle of the Mojave Desert will be of equal value. During the transition period, however long that takes, most people will be living on savings. Plan accordingly.
Mike Deering, Director,
http://www.SingularityAwareness.com
Email: deering9 at mchsi dot com
Posted by: Mike Deering | April 05, 2004 at 08:13 AM
Somehow I doubt that the advent of MNT is going to obsolete money. After all, it won't eliminate all forms of scarcity... Just scarcity of manufacturing capacity.
There will still, barring the introduction of benevolent AI, be a shortage of design capacity. Even if designers like myself would be working for the sheer joy of creation, getting us to design something YOU want would require an incentive.
Then there's scarcity of rare elements, land with a nice view, power... No, money will still have its uses, even if it won't be necessary for day to day survival.
Posted by: Brett Bellmore | April 05, 2004 at 08:40 AM
Mike wrote:
Cheap products and robotic labor will destroy the economy. . . Default and foreclosure of mortgaged property will be almost universal. . . State unemployment coffers will be depleted in the first month. Public assistance programs/welfare will be terminated due to over utilization. Property tax foreclosures will be endemic. Retirement funds will collapse, even government ones.
You've got a contradiction here. If we get cheap products the government can supply welfare and pension payments in kind as food and clothing shipments. A major crash will see a "bank holiday" (as in the Great Depression) with new laws restricting foreclosures. Turmoil, yes, but we're not going to have people starving or sleeping under bridges.
During the transition period, however long that takes, most people will be living on savings. Plan accordingly.
The savings we keep in our banks? If paper money's worthless, so are our accounts. If you foresee that kind of collapse, keep a few years' worth of food and ammo handy. I think there are likelier things to worry about.
Posted by: Karl Gallagher | April 05, 2004 at 12:41 PM
Perhaps the biggest contradiction is that, in the middle of listing all these disruptive and demoralizing consequences, Mike says: "Crime will be greatly reduced." I think this is as unlikely as the idea that nano-anarchy will lead to everyone living peacefully together.
Posted by: Chris Phoenix, CRN | April 05, 2004 at 08:45 PM
You're right. That post was not very clear. In my attempt to be brief I left out so much detail that the main points didn't make any sense. My ideas about the Singularity are based on certain assumptions, and looking at the whole of technological development, not just nanotechnology, you get a different picture of the future.
Assumptions:
1. Nanotechnology will first be used to build computers millions of times more powerful than are available today at almost zero cost. This will result in the almost immediate development of super-human artificial general intelligence (SAGI). Just as there are many paths to nanotech, there are many paths to SAGI and all of them are greatly accelerated by more powerful computers.
2. The combination of nanotech and SAGI will rapidly develop a complete knowledge base of molecular genetic and proteomic biological functions. At this point, whoever controls the technology can do whatever they want.
Conclusions:
1. The genie machine will be invented. The genie machine will be a combination of a nanofac and SAGI in a Utility Fog implementation. It will be distributed to the population not by the government but through a peer to peer network. Each genie will be dedicated to serving and protecting a particular human being but also be connected to the other genies by a wireless network.
2. The economy will collapse and all those other things I mentioned will occur, but few will care because they will have become independent from the economy.
3. The traditional power structures of the world, governments, large corporations, rich people, will band together and attempt to achieve centralized control of everything. A conflict will result between the worldwide community of SAGI genies and the government's single super-SAGI. The duration and outcome of this conflict are not known.
Okay, you can all start laughing now, but when you're done, think about whether anything I have described is impossible. There are many possibilities in the Singularity. As the CRN guys say, if someone doesn't think about what kind of Singularity we want, we may get a Singularity we don't want, or something like that.
Mike Deering, Director,
http://www.SingularityAwareness.com
Email: deering9 at mchsi dot com
Posted by: Mike Deering | April 06, 2004 at 07:32 AM
I'm sorry, but the idea that a benevolent "genie machine" is the natural consequence of powerful computers means that your ideas haven't developed at all since MNT was proposed almost 20 years ago. Given that we may only have five years left, this is upsetting, but it suggests that there are better ways to spend time than debating this topic.
Posted by: michael vassar | April 06, 2004 at 11:00 AM
Bwahh ha ha!
Ok, now that I'm done laughing, let me be serious. A "Genie" machine, assuming it could be built, is about as likely to end up being Jafar as Robin Williams. Maybe more likely. I'm reasonably confident that we'll be able to produce an AI of greater-than-human intelligence using nanotech some time this century. (Equivalent to human intelligence, through detailed neural emulation of a particular brain: almost dead certain.) That we'd be able to guarantee its benevolence, should we do so, is far more dubious. Frankly, we'd be far wiser (I was going to say "smarter", but fear being punished) to concentrate on "Amplified Intelligence" rather than "Artificial Intelligence"; at least that would put our interests in control more or less automatically.
Anyway, who wants to be a moderately evolved chimp pampered by a genie, when they could be a god? LOL
Posted by: Brett Bellmore | April 06, 2004 at 03:48 PM
Doesn't Fermi's question say something about benevolent A.I.? The first possibility is that A.I. is either benevolent or doesn't have any feelings at all. The other possibilities are covered in Rare Earth, or Isaac Asimov's "Extraterrestrial Civilizations." (Asimov's book, as noted, came over ten years earlier and said much the same things with regard to the various constraints on a technologically dependent species' chances of existence.)
While reverse engineering the brain may give some good design engines, I don't think that makes for an artificial lifeform intelligence that has any free will or feelings one way or another. Intelligence is not necessarily feelings and consciousness. We have design engines already; none of them have either feelings or consciousness, no matter how fast the hardware you run that software on. I think you could have a singularity without the machine coming to consciousness and getting free will and so on and so forth.
Posted by: davidoker | April 06, 2004 at 06:44 PM
If reverse engineering the brain at the level of individual neurons didn't produce a functional *synthetic* intelligence that claimed to be conscious (in fact, claimed to be the person whose brain had been reverse engineered), it would be pretty good proof that our models of nerve function were inadequate.
A low-level, neuron-by-neuron emulation of a particular brain would be a wonderful test bed for exploring ways of enhancing intelligence, even if it would be computationally very inefficient compared to a higher-level implementation. The chief advantages are that it wouldn't require understanding intelligence, just cells, and that the motivations of the resulting AI would be fairly predictable. There are ethical issues, though. Informed consent might be obtained pre-mortem.
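To put the "just cells" point in concrete terms, here is a toy sketch of the simplest kind of per-cell model such an emulation would be stacked out of: a leaky integrate-and-fire neuron. It is only an illustration; the function name and every constant are made up, and a real emulation would need far richer cell models.

```python
# Toy leaky integrate-and-fire neuron: modeling a cell's dynamics, not
# "intelligence". Every constant here is invented for illustration only.
import random

def simulate_lif(currents, dt=0.001, tau=0.02, v_rest=-0.070,
                 v_reset=-0.075, v_thresh=-0.054, r_m=1e8):
    """Return spike times (seconds) given one input current (amps) per step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(currents):
        # Membrane potential leaks toward rest and is driven by the input.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:            # threshold crossed: the cell "fires"
            spikes.append(step * dt)
            v = v_reset              # and resets
    return spikes

# One second of noisy, roughly constant drive.
drive = [2.0e-10 + random.gauss(0.0, 2.0e-11) for _ in range(1000)]
print(simulate_lif(drive))
```

Simulate a few hundred billion of those, wired the way a particular brain is wired, and you have the test bed without ever having to define intelligence.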
Posted by: Brett Bellmore | April 06, 2004 at 07:28 PM
I recall they were going to try doing this with a monkey a few years back; I take it that was not successful...
Posted by: davidoker | April 06, 2004 at 07:29 PM
I agree with posters here who say that this is not the end of economics. I've yet to hear a convincing argument that nanotechnology is going to do away with scarcity. It may greatly reduce many costs, but it will not eliminate cost altogether. There will always be the opportunity cost of a person's time as they decide on what they want their nanofactory to build for them, for example.
Even if one day artificial life achieves sapience, all that really means is that we have another set of starving artists that will want payment for their works. To design them so they just give their labor to us for free seems like slavery to me.
Oh, and I agree that intelligence amplification is more likely to achieve results in the near term than artificial intelligence.
Posted by: Mr. Farlops | April 07, 2004 at 01:35 AM
"I recall they were going to try doing this to a monkey a few years back; i take it was not successfull . . ."
Check again. Last I heard, they were up to garden slugs, maybe small insects.
Posted by: Brett Bellmore | April 07, 2004 at 02:18 AM
Mike (and others), what do you think of Eliezer's claim that (if I understand correctly) any SAGI that's not *very* carefully designed and trained will quickly and inevitably wipe us out as a side effect of doing the first thing it's asked to do?
I haven't heard any real argument against it... if no one has an argument against it yet, maybe we should start taking it seriously until we find one?
Posted by: Chris Phoenix, CRN | April 08, 2004 at 12:06 AM
I haven't read the whole website of the Singularity Institute, so could you give me a specific link to where he talks about out-of-control A.I., please?
As for A.I. deciding to take out humanity at the first command we give it: I'd just like to start out with the idea of consciousness as autopoiesis, since those ideas are where I'm coming from. I won't describe all that right now, as I want to finish reading my last Jacob Bronowski book, "The Western Intellectual Tradition," soon enough. But I would like to say that a feature of my understanding of intelligence at least (and not necessarily consciousness) is negative feedback. A particular example of negative feedback is James Lovelock's Gaia idea; yes, the earth's ecosystem isn't biological in the sense of being conscious, but it does have some characteristics of being alive. Much like any multicellular lifeform, it is composed of many lifeforms, and these lifeforms come together to operate the whole. The earth's ecosystem is at least much like a superorganism, like ants and bees. The Gaia idea is that the bacteria of the world keep the temperature and other chemical processes in balance for life to live (the cells may do the same in multicellular animals and plants; could cancer be an effect of cells not taking part in keeping the chemical balances right for them?). Gaia does this by negative feedback. If it is too hot, or chemical balances are not quite right here and there, it increases or decreases appropriately somewhere in its web of negative feedback loops. In the process, it may destroy certain groups of organisms, both single-cell and multicell.
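Here is a toy sketch of what I mean by negative feedback; the numbers and the regulate function are made up, and it isn't a model of Gaia, just the bare control idea:

```python
# Toy negative-feedback loop (all numbers invented): a "temperature" is nudged
# back toward a set point each step, the way a thermostat -- or, very loosely,
# the Gaia picture -- counteracts a disturbance.
def regulate(temp, set_point=15.0, gain=0.3, disturbance=0.0):
    """One step of proportional negative feedback on a temperature (deg C)."""
    error = temp - set_point
    return temp - gain * error + disturbance   # push opposite to the error

temp = 22.0                                     # start out too hot
for step in range(20):
    temp = regulate(temp, disturbance=0.5 if step == 10 else 0.0)
    print(f"step {step:2d}: {temp:5.2f} C")
# The temperature converges toward 15 C and recovers after the step-10 bump.
```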
I'm supposing that an A.I. may destroy some (who knows how many; maybe all?) intelligences if they are not part of its solution, but that assumes you hook it up to a gun or something. If an A.I. starts killing with no remorse, then I have to question whether it has genuine feelings and hence consciousness. Yes, killers have feelings, as I've learned recently (I was recently held up at gunpoint; lost my backpack with all my books; hope he enjoyed the Scientific American articles about origin-of-life protoplasm, and the copies of chapters from Morris Kline's "Mathematics and Western Culture," not to mention Carl Sagan's "Contact." Anyway, he could have shot me dead, or even taken out my knee just for the hell of it, but he didn't. Why? Because he's an amateur and not emotionally able to). In other words, what you have is artificial intelligence, not a new consciousness.
Posted by: davidoker | April 08, 2004 at 06:24 AM
The real danger of a malign superhuman artificial intelligence (or just one with its own agenda that we're in the way of) isn't that it would immediately attack with whatever tools it was equipped with. That wouldn't be very smart of it, would it? The real danger is that it would employ its intelligence to gain our trust, and only turn on us after we had come to rely on it and had placed it in a position of great power.
Creating artificial intelligences to do our thinking for us is uncomfortably close to slavery, and creating such intelligences that are SUPERIOR to us, and trying to enslave them, would be remarkably foolish. Our best bet is accepting that we have to do our own thinking, if we're to be in control of our own lives.
It ought to be possible to use nanotech to improve our own capacity for thinking. We've got the start of that in drugs like modafinil, which allow you to refrain from spending a third of your life unconscious. Fairly unsubtle interventions could keep neurotransmitter supplies topped off to prevent mental fatigue, and a greater understanding of the nature of intelligence ought to allow us to substantially boost the IQs of even existing people, let alone the next generation.
Posted by: Brett Bellmore | April 08, 2004 at 08:35 AM
A genie machine is the very realistic and natural combination of Nanotech and A.I.
I just read a very interesting article on the Foresight site that talks about the security implications of the hand in hand development of nanotechnology and A.I.
Nanotechnology and International Security
by
Mark Avrum Gubrud
Center for Superconductivity Research
University of Maryland
College Park, MD 20742-4111
http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html
Here are some excerpts from the article:
"However, assembler-based nanotechnology and artificial general intelligence have implications far beyond the Pentagon's current vision of a "revolution in military affairs."
"Advanced molecular manufacturing based on self-replicating systems, or any military production system fully automated by advanced artificial intelligence, would lead to instability in a confrontation between rough equals."
"The possibility that assembler-based molecular nanotechnology (Drexler 1986, 1992) and advanced artificial general intelligence may be developed within the first few decades of the 21st century presages a potential for disruption and chaos in the world system."
"Further, artificial intelligence will displace skilled as well as unskilled labor."
"Molecular manufacturing based on self-replicating systems, and superautomation by artificial intelligence, will also profoundly alter the issue of cost."
"With the emergence of molecular manufacturing, and still more with advanced artificial intelligence, the world economy will experience profound upheavals."
"More advanced systems would converge towards astonishingly fast speeds and high energy efficiency, would draw material feedstocks directly from the environment, and would construct any special facilities as needed. With the addition of advanced computation, say, equivalent to humanoid artificial intelligence, the process could be entirely automated."
That last quote sounds like a genie machine to me.
Mike Deering, Director,
http://www.SingularityAwareness.com
Email: deering9 at mchsi dot com
Posted by: Mike Deering | April 08, 2004 at 08:39 AM
"Mike (and others), what do you think of Eliezer's claim that (if I understand correctly) any SAGI that's not *very* carefully designed and trained will quickly and inevitably wipe us out as a side effect of doing the first thing it's asked to do?"
Chris, I am really glad you asked this question. A.I. and nanotech are inextricably linked. They are advancing in lock-step because the one cannot advance without the other. This interdependence means that they will maintain the same level of progress toward the final goals of molecular machinery and AGI. It is understandable that you would think that AGI is at least a generation away if you are not following closely the developments in A.I. research. Just as you would think that MM is at least decades away if you only knew what you read on CNN. Here is some stuff you should know about AGI research:
Biology is an existence proof of molecular machinery and computational intelligence.
AGIs, like nanobots, will not be reverse-engineered copies of their biological counterparts, but rather original engineered designs using some of the same concepts used by biology and some wholly new concepts.
The design stage of AGI development is at the same level as, or a more advanced level than, the design of the assembler.
Both the assembler and the AGI are very complex design challenges that some people believe are beyond our intellectual capability. They are wrong in both cases.
Experts in both fields, Richard Smalley and Marvin Minsky, claim that they will not be developed for a long time, if ever. They are both wrong.
AGIs do not have to be people. This is a very important point. There are many different cognitive architectures that can support intelligence; not all of them are conscious, self-motivated, self-serving entities. It is very possible to design a piece of software with general intelligence and problem-solving ability but without ego. This would significantly reduce the dangers of AGI. Even though this would remove the danger of the AGI taking over the world for its own purposes, it would not remove the danger of the person controlling the AGI taking over the world, or making a mistake in the use of the AGI if the AGI were significantly more intelligent than the user. If the AGI is significantly more intelligent than the user, the user might not understand all of the implications of the results produced by the AGI.
The combination of general-purpose human-like reasoning ability with computer speed, complex accurate serial computation, data storage, and reprogrammability will make human-level AGIs automatically super-human-level intelligences.
The leading contenders in the AGI race that I am aware of are Ben Goertzel's Novamente project and James Andrew Rogers's secret AGI project. Both claim to be no more than twelve months away from a working prototype with human-like reasoning.
If there is a secret government nanotech project, it is also a secret AGI project.
Posted by: Mike Deering | April 08, 2004 at 09:34 AM
David, my question was based on a recent chat conversation with Eliezer. I'd assumed his concern would be findable on his websites, but I've been unable to find it.
Mike, you say (heavily elided), "the danger of the person ... making a mistake in the use of the AGI if the AGI was significantly more intelligent than the user ... the user might not understand all of the implications of the results produced by the AGI ... human level AGI's [will be] automatically super-human level intelligences." Combined with the idea that a self-improving human-level AGI will rapidly become far more intelligent than humans, doesn't this mean that the consequences of an AGI will be essentially random from a human's point of view? And basically, anything we ask it to do will result in undesired side effects with impact proportional to the AGI's capability?
Posted by: Chris Phoenix, CRN | April 08, 2004 at 12:14 PM
I think, rather, that it means you'd have to be extraordinarily careful how you worded commands. No grand, general tasks such as "End world hunger." Limited, albeit difficult, tasks, such as, "Genetically engineer a new food crop having the following characteristics."
Posted by: Brett Bellmore | April 08, 2004 at 03:57 PM
Chris, not necessarily random from our point of view. No matter how smart it gets, I don't think it will be incomprehensible to us the way that we are incomprehensible to dogs. I could be wrong, but I think there is a limit to intelligence. Once you have mastered general-purpose reasoning techniques (logical, causal, algorithmic, computational), as we have, then even if the SAGI can perform at levels that we can't duplicate, we should be able to understand in general the reasoning method used, and the SAGI should be able to explain the result to us in a way that makes sense. I don't think it is going to be able to evolve itself completely beyond our ability to comprehend its thought processes. We have mapped out enough of the algorithmic problem-solving space to assume that there are natural limits to what is possible. Eliezer doesn't agree with me about this. I guess we will just have to wait and see.
What I was saying might be a problem is that many problems are very complex, and the solution suggested by an SAGI may also be very complex. The SAGI may understand all of the implications but we might not, at least at first glance. I would suggest asking a lot of questions about any course of action proposed by a SAGI. I'm not saying that it is useless to get solutions from a SAGI, just that you would need to be very careful.
Mike Deering, Director,
http://www.SingularityAwareness.com
Email: deering9 at mchsi dot com
Posted by: Mike Deering | April 08, 2004 at 11:13 PM
Mike, one general purpose reasoning technique is pattern recognition. But the pattern may well be in a space we can't comprehend. For example, suppose we could build a system that could spot patterns in 10 or 100-dimensional space. (I think liquid state machines can, in theory, do this.) Suppose we fed it lots of data, and said, "tell us what to do to optimize these economic parameters," and it recommended shutting down 10% of our paper factories for eight hours next Thursday. Would we have any hope of understanding what else would happen? I don't think so. Would this kind of tweaking be likely to have wild effects somewhere else? I do think so.
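To make that concrete, here is a tiny illustrative sketch (synthetic data, invented numbers, nothing to do with liquid state machines specifically): even an ordinary linear model fit in 100 dimensions ends up acting on a pattern that is just a hundred opaque weights, with nothing a human could sanity-check on its own.

```python
# Toy illustration: a linear model "finds" a pattern in 100-dimensional data,
# but its "reason" is just 100 weights. All data here are synthetic.
import random

DIMS, SAMPLES = 100, 500
random.seed(0)

# A hidden relationship the learner will recover but a human never sees.
true_w = [random.gauss(0, 1) for _ in range(DIMS)]
data = [[random.gauss(0, 1) for _ in range(DIMS)] for _ in range(SAMPLES)]
target = [sum(w * x for w, x in zip(true_w, row)) for row in data]

# Crude stochastic gradient descent on squared error.
w = [0.0] * DIMS
for _ in range(50):
    for row, y in zip(data, target):
        err = sum(wi * xi for wi, xi in zip(w, row)) - y
        w = [wi - 0.001 * err * xi for wi, xi in zip(w, row)]

# The system's "recommendation" is a nudge along all 100 dimensions at once.
top = sorted(range(DIMS), key=lambda i: -abs(w[i]))[:5]
print("Dimensions that matter most:", top, "-- now try explaining why.")
```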
Posted by: Chris Phoenix, CRN | April 10, 2004 at 10:40 AM
Yes, perhaps we would get weird answers like this. I don't know. I think we would mostly get answers that seem more reasonable. What choice do we have? Not to build the SAGI? I think one thing we can do to be better managers of super intelligent tools is to increase our own intelligence. Nanotechnology enables many ways of doing this.
Posted by: Mike Deering | April 10, 2004 at 04:38 PM
Here are some interesting points on what previous quantum leaps in productivity have done:
http://www.jim.com/econ/chap07p1.html
http://www.jim.com/econ/chap07p2.html
http://www.jim.com/econ/chap07p3.html
http://www.jim.com/econ/chap07p4.html
Posted by: Tom Mazanec | January 17, 2007 at 12:58 PM