When and why will molecular manufacturing revolutionize the world? From a technical point of view, the answer has a few subtle but easily understandable aspects. Grasping those points will let us extrapolate from current and near-future technology developments to gauge how far we are from a molecular manufacturing breakthrough.
Over the next few weeks, I'll be writing a series of posts exploring the various aspects of fast takeoff. I'll be covering design spaces, product design, factories-building-factories, product performance, the economics of competing technologies, and whatever else seems necessary to understanding the difference between a cool technology and a revolutionary one.
By the time I'm done, CRN's new website design should be live, and I'll convert these posts into new content. (Yay!)
Here's a teaser: A very basic and primitive computer-controlled molecular manufacturing system might have a million atoms (or molecular building blocks). If 99% of those atoms can be placed by the system, then 10,000 atoms must be placed "by hand." That's a very large molecule, or a very large number of scanning probe operations. Probably, a system like this would not be revolutionary - too hard to build, to design, or both. But a system that could handle 99.99% of its atoms would only need 100-atom "inputs" per copy. That is quite feasible by today's standards. So a difference of less than 1% can make the difference between a laboratory demo and a revolutionary manufacturing system.
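The teaser's arithmetic can be checked in a few lines. This is purely illustrative; the million-atom system size and the 99% / 99.99% self-assembly fractions are the hypothetical numbers from the paragraph above, not measurements:

```python
# Illustrative arithmetic: how many atoms must be placed "by hand"
# for a given fraction the system can place itself.
total_atoms = 1_000_000  # hypothetical size of a primitive system

for self_built_fraction in (0.99, 0.9999):
    manual_atoms = round(total_atoms * (1 - self_built_fraction))
    print(f"{self_built_fraction:.2%} self-built -> "
          f"{manual_atoms} atoms placed by hand per copy")
```

Running this shows the two-orders-of-magnitude gap between 10,000 and 100 manually placed atoms that separates a lab demo from a practical system.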
"Less than one percent"?
Don't insult my intelligence, please. That's two orders of magnitude, and I (and many others) would take this series much more seriously if you'd refrain from that sort of mathematical sleight of hand...
Posted by: Svein Ove | March 29, 2009 at 02:29 PM
It is two orders of magnitude. It is also 1%.
In the course of developing a molecular manufacturing system, there will be a time when it can build 5% of its structure: basically, no more than a proof of concept. There will be a later time when it can build 50% of its structure. Then, 90%, 99%, 99.99%, 100%.
I'd guess that, in many research trajectories, the first 1% will be at least as hard to develop as the last 1%. It may be that the hardest problems will be saved for last. Or it may be that most of the hard problems will be solved in the first 80%, and only a little bit of molecular CAD will be needed to get from 80% to 100%.
If I were talking about, for example, a purification process, then there'd be a huge difference between 99% and 99.99%. But I'm talking about finding designs in a design space.
In a design space, the number of potential designs is huge. The number of designs we know how to build is much smaller, so we will not be coming anywhere close to filling the space or using up the designs.
For example, if we need 10,000 "designs" or recipes to build a million-atom robot, and the design space contains 10^250 designs, then which is harder: finding the first 100 designs that work, or finding the last 100 designs when we already know 9,900 designs that work?
I stand by my 1%.
Chris
Posted by: Chris Phoenix | March 29, 2009 at 02:50 PM
Hmmm, I am noticing a lot of hostility on this and other web sites in the last few years. 10-15 years ago, when I started reading and commenting on MM tech, there was a lot of "I do not believe it." Now I am hearing a lot of upset people saying things like "OK, let's get on with it, we should have already had this technology." I wonder if this is the way people go through the acceptance of new technology.
Todd
Posted by: todd andersen | March 30, 2009 at 10:23 AM
Hm. Yes, I was probably too hostile in that post; sorry about that.
But while I appreciate your reasoning, I still think you were misrepresenting the problem. The first percent might be as hard as the last percent, but in the middle there's a lot of relatively simple work, where one design can be applied to large parts of the full system. So going from 99% to 99.9% is, indeed, far harder than going from 98% to 99%; maybe not two orders of magnitude harder, but by no means linear either.
Also, Todd:
I never was one of the naysayers; it always seemed to me that biology itself made a great proof of concept, and even if we could just duplicate what biology did, that'd be revolutionary by itself.
But as time has passed, I've gotten more and more skeptical that implementing molecular nanotechnology is a *good idea*. This site does a good job of presenting the downsides, but its proposed solutions seem incomplete and/or unlikely to result in a world I'd enjoy living in; as a result, I believe a breakthrough in AGI *first* would be more likely to... not go horribly wrong.
(That's based on the assumption that AGI is feasible, of course, and that we'd end up getting it eventually anyway. It's scarcely any safer, but it is at least a potential stabilizing influence.)
Posted by: Svein Ove | March 31, 2009 at 01:47 AM
The question then is: Is MM going to be subject to exponential progress (like the human genome project: the first 1% took a long time, then the rest was done with exponentially increasing speed and repeatability), or will it follow the much harder 80-20 rule, where we rather quickly (relatively speaking, of course) do the 80% that is easiest to achieve, and then stumble over the remaining, hard 20%?
Posted by: Hervé Musseau | April 06, 2009 at 02:10 AM