
April 25, 2007




How about 2023? According to Robert Freitas in his latest interview, it will take at least 16 years. So if you start now, that makes 2023 the year of completion. The interview, conducted by Michael Anissimov for the Lifeboat Foundation, is available here.

Chris Phoenix, CRN

Freitas said, "Very roughly, our latest estimates suggest that an ideal research effort paced to make optimum use of available computational, experimental, and human resources would probably run at a $1-5M/yr level for the first 5 years of the program, ramp up to $20-50M/yr for the next 6 years, then finish off at a ~$100M/yr rate culminating in a simple working desktop nanofactory appliance in year 16 of a ~$900M effort."

First, note the qualifier: that plan makes optimum use of resources. It spends less than $1B over 16 years. And for the first five years, it spends a pittance. This is not the fastest it could happen - it's how fast it might happen with a stingy manager.
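Freitas's quoted figures can be cross-checked with some quick arithmetic. The sketch below is a rough sanity check, not from the source; the five-year length of the final phase is my inference from 16 - 5 - 6:

```python
# Rough sanity check of the Freitas budget quote (all figures in $M).
# Assumption: the final phase runs 5 years at ~$100M/yr,
# inferred from 16 total years - 5 early years - 6 ramp-up years.
phases = [
    (5, 1, 5),      # years 1-5:   $1-5M/yr
    (6, 20, 50),    # years 6-11:  $20-50M/yr
    (5, 100, 100),  # years 12-16: ~$100M/yr
]
low = sum(years * lo for years, lo, hi in phases)
high = sum(years * hi for years, lo, hi in phases)
print(low, high)  # 625 825
```

Either way the total lands between $625M and $825M, roughly consistent with the quoted "~$900M" and comfortably under $1B over 16 years.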

Second, it looks like Freitas is assuming that no other MM-targeted research will be done--that this program will have to do it alone. Over the past decade, the lack of (non-classified) MM-targeted research has been surprising (even given the politics). But almost equally surprising has been the untargeted but very useful progress in enabling technologies.

Look at what the Ideas Factory plans to accomplish for a few million dollars! That's not MM-focused, though it appears to have been inspired in part by MM. But if they do what they're planning, we'll have a very useful toolset for R&D of mechanosynthetic reactions... in five years or so.

Another factor is that the plans keep getting simpler. Back in '86, manufacture was thought to require vast numbers of cooperating, navigating micron-scale robots. In '92, the robots were fastened down into a nanofactory. In the early 2000s, planar assembly was designed, sharply reducing the mechanical design problems of nanofactory scaleup. There will probably be other advances that no one's thought of yet, which will make large-product MM even easier to do.

By 2015, I fully expect that we will be in a place where a five-year, sub-$1B program can pretty clearly develop a nanofactory--and where lots of people recognize that. It wouldn't surprise me if we were in that position by 2012 or even 2010.

And for some time now, we've been in a position where a sub-$10B ten-year program could develop a nanofactory. Were we in that position all the way back to 2000? Did any "Nanhattan project" funders step forward? We don't know.



Excellent post Chris. You really helped to clear up your side of the prediction. I am really excited to see which one comes first.

Some futurists that I have been reading are really hoping that we get AGI before MM, simply because of the incredible risks that MM introduces. Do you think that humanity will be able to control this awesome new technology when it comes out? I know that is the whole purpose of CRN, but will it be enough?



Humanity better hope so because it looks like MM is coming first...




Needless to say, there are many possibilities. It is 1 May 2007, 5:27 p.m. as I sit at my computer contemplating them. On the question of whether strong artificial intelligence will arrive before MM, I of course cannot say for certain, although it would appear the two are interconnected. If we get artificial intelligence before MM, one would simply use the artificial intelligence to create a molecular manufacturing device. If we get molecular manufacturing before artificial intelligence, one would use the remarkable increase in production capabilities within the computer hardware industry to produce a computer with unparalleled computational characteristics. This might grant artificial intelligence, as many have said, through a hardware solution. There may still be significant software problems with the potential artificial intelligence, however, which would in turn prevent the creation of strong AI.

We are left with only possibilities, not certainties. On the question of whether a well-funded, large research project (i.e., a black operation) has already begun: this reeks of conspiracy, and I for one would find it doubtful. With that said, there are certainly groups of individuals around the globe looking at some part of the overall MM concept. One could describe these groups as contributing, willingly or otherwise, to the overall project of MM. Individually they may not see themselves as covert operations, but they could be described as such.

Michael Deering

With MM, the engineering and design efforts needed to make harmless stuff like furniture, shoes, and houses are much lower than for dangerous stuff like EMP projectors, ecophagic biovores, and nanobots that take over your enemy's brain. With AGI, just the opposite relationship exists: it is immeasurably harder to make a smarter-than-human artificial intelligence that doesn't end up destroying the world than to make one at all. This is what Eliezer has been screaming from the rooftops for years.

The odds of getting MM right are at least within the realm of the practical. Much of our experience with the dangers of WMDs, malware, and IP is directly transferable to our MM future, but AGI presents whole categories of threats that are totally unprecedented. In fact, the dangers of MM are a wholly owned subset of the dangers of AGI. Thanks to the work of CRN and others, people at high levels of government, industry, and academia are talking about what kind of structures may be necessary to make MM safe. Despite the work of the SIAI, no corresponding debate involving the world's power brokers exists for the dangers of AGI.


I don't buy the conspiracy theories either, not because things like that never happen in the real world; on the contrary, they do. Rather, I think the current global political situation does not create the pressure to pursue such things.

In the Cold War there was a strange project started by the Russians that involved people psychically "viewing" remote locations in the US. This worried top Pentagon officials enough that they created their own "remote viewing" program. I'm sure millions were spent on both sides.

Basically, in order to have a secret program to develop something that many consider impossible, you have to have a large political and economic rival that's willing to entertain unpopular ideas and jump-start the spending. For whatever reason, the Soviet Union believed in a lot of outside-the-mainstream ideas. Lysenkoism, etc.

Molecular manufacturing is at least several orders of magnitude more plausible, even to skeptics, than remote viewing or Lysenkoism, but without a strong rival nation pursuing it, the Western establishment will continue to ignore it for now and the immediate future.

Tom Mazanec

The theory of Generational Dynamics in history (which I find persuasive) expects this to change. And even now there is the war on terror. It is the job of the military to attempt to prepare for any contingency.

Chris Phoenix, CRN

Jonathan, complete control is basically impossible. I have hopes that we will be able to avoid running off a cliff.

An analogy is coming to me that may be worth writing up: You are a child learning to ride a bicycle. You will, of course, fall off and scrape your knees. Your parents had better stock up on band-aids. As your skills develop, you may easily go fast enough to break bones. But the real problem is the highway... you'd better know what you're doing before you pedal that far.

Now imagine that your parents don't exist, and you're learning to ride the bike on your own. You can probably survive the scraped knees, and you might have enough native caution to avoid breaking bones. But if you don't know what a highway is about, you may think it's a great place to ride--until a car comes along and flattens you.

OK, so maybe it's not a good analogy. The point I was trying to make is that there are several classes of risk involved with MM. There are things that'll go wrong but can be dealt with. There are things we'd really rather not see. And then there are the catastrophic risks.

That's why I like to ask: If you knew that in five years, you'd have to walk a tightrope without a net, how soon would you start practicing?

I suppose that SingInst would say that AGI is like walking a tightrope without ever practicing...

I have to say that I don't have much faith in AI's ability to help us deal with MM. AI won't be miraculous, at least not at first; it will be hard to produce better results than we know how to program. If we're lucky, we won't get stronger-and-worse results than we know how to program.

