
November 16, 2005

Comments


Jan-Willem Bats

Chris,

It's been known for quite some time that MNT can produce any product at a few cents per kilo.

But that is only the material aspect of products. The materials of which our products are made are already dirt cheap. What we pay for is the information behind it: the way in which those dirt-cheap materials are ordered to make a functional device. We pay for the time that a guy has spent designing the machine.

When MNT gets here, devices will still have to be designed by somebody. Won't we still be paying just as much for our products, even though the production of them is virtually for free?

The CRN Taskforce is looking into this, right? When might we expect a report from them?

Jay

Tom Mazanec

It will be adopted soon. There are too many potential players who will want it and be able to develop it on their own by 2020. It will be adopted by the end of the next decade, if we don't blow modern civilization in the meantime. Who gets it first is the big question.

michael vassar

"The consensus is that the basic technology will probably come into existence sometime between 2010 and 2020."

Come on Chris. The consensus among who? I suppose among the small set of experts most familiar with MM. However, to a substantial degree they have become so familiar with MM because they are outliers from a much larger group of experts who are less familiar with MM. The average member of this larger group sees MM as much further away and so doesn't bother to gain great expertise in it. The subset of experts who study MM communicate mostly with one another, as other experts are too ignorant to add much knowledge, but by communicating with one another they confirm their initial impressions and reach a sense of false consensus. This is actually an interesting and complex epistemological problem, but it can at least be examined somewhat by looking at the validity of estimated timelines used by past experts in various technologies. Historically, such experts have had many hits and many misses.
Part of what biases MNT experts may be their implicit use of normative decision theory rather than a more accurate empirical theory. There are numerous technologies that would clearly be very high impact compared to their cost of development and which could clearly be developed more rapidly and with greater certainty of success than can be honestly attributed to any nanofactory development plan. Despite this, MNT experts typically do not expect these technologies to be developed prior to nanofactories. The reason for this is that the expected utility of spending on these technologies is, although great, not as great as the expected utility of spending on nanofactories, so MNT experts expect nanofactories to be developed first despite their greater difficulty.

MNT experts also have abundant empirical historical data suggesting that merely extremely promising technologies are typically not rushed into development as soon as their feasibility and economic desirability are demonstrated. However, those technologies are only of great utility. The expected payoff of investing in them may be 10 to 1, but it isn't 10,000 to 1. For this reason, the argument for investment in MNT seems so compelling to MNT experts that they are sure it will happen. I think that this attitude is backwards. Ordinary promising technologies are not ignored because they are not promising enough to draw attention, but because they promise enough to appear to be science fiction to most investors. Likewise, the development of MNT is likely to receive less funding than the development of less promising technologies BECAUSE of the high expected payoff.

It will remain almost unfunded regardless of theoretical or experimental demonstrations of its feasibility, because the feasibility will ultimately be judged by people in control of money, and they will not be using technical models to judge feasibility. Instead they will be using intuitive economic models. The number one question they will be asking themselves is "if the payoff to this idea is so great, why hasn't someone else done it?" This question will become more compelling the older the idea is, and MM is already an old idea. Investors will, with good historical precedent, trust the predictive power of markets more than the predictive power of engineers, especially because of the many historical instances of engineers being technically right about feasibility for products that were nonetheless unprofitable. (Selection will ensure that investors who lack such attitudes will not end up controlling vast resources.) As a result, most investors will continue not to fund MNT, while government funds will continue to be diverted to MNT-irrelevant projects associated with well-connected people.

Jan-Willem Bats

"The consensus among who? I suppose among the small set of experts most familiar with MM."

And that is exactly the only group whose opinion actually matters.

Jamais Cascio

"molecular manufacturing systems will be able to manufacture a wide range of products direct from blueprints in an hour or so"

Chris, I've seen you make this assertion before, and I'm a bit unclear on how it's derived. I'm not saying I don't think it's possible; I'm just saying that one could easily imagine MM systems working faster and MM systems working far, far slower. Is there a particular reason why you think they'll take about an hour?

("RealityCrafters: New Material Societies, In About An Hour" -- check your local nanomall)

michael vassar

jan-willem:
Did you actually read my post, where I pointed out in great detail why "those experts most familiar with MM" are NOT the only group whose opinions matter?

Karl Gallagher

Jamais wrote:
"one could easily imagine MM systems working faster and MM systems working far, far slower"

I'd expect the first prototypes to take a month or more to crank out 1 kg of product. In another 5 or so years they'll be able to do it in an hour. So we'll see the capability when they're curiosities, then try to adapt as they supersede their predecessors completely a few years later.

Tom Craver

Karl:
I've also speculated that the first prototypes might be slow. The likely result would be creation of "nanofac plates" - thin but wide (and very light) nanofactories that could produce another nanofac plate in maybe a day.

That would still allow for a fast replication rate - even if every unit only made 2 others, they'd be everywhere within a month. Demand would be high, as they could produce (or even emulate) electronic gadgets such as a tablet computer. But their slow mass production rate would keep them from being quite so dangerous.

However, I then realized that one could stack up several hundred of them 'fanfold' fashion - \/\/\/\/\/\/ - and have them push products out of the 'folds' as they build them up. That's a little harder, but it shouldn't take more than a year to get it working well enough to produce a copy of itself.

So within a year of getting nanofac plates, a fast 3D production method could become widely available, likely limited mainly by heat and power consumption and availability of designs.

Chris Phoenix, CRN

Michael, you quoted my sentence about consensus out of context; the meaning of "the" changes quite a bit, and makes my claim look less supportable than it really was. You take advantage of that to spend a few paragraphs arguing me back to admitting that I had asked only MM experts for the consensus--which was what I had stated explicitly in the previous sentence.

We have already seen at least two people--Jim Von Ehr and Mark Sims--start companies to develop molecular manufacturing. Both are working on enabling technologies rather than going directly for nanofactories, but that doesn't change the fact that their intent is to work directly toward MM. Five years from now, the multi-million-dollar level of resources that they're putting toward MM will be enough to fund much more of the total project--perhaps the whole thing. Unless you want to argue that work toward MM will *decrease*, you can't argue that it will receive no targeted investment.

Chris

Chris Phoenix, CRN

Tom: Once you can build mechanosynthetic systems that can build sheets of product, you're most of the way to a 3D nanofactory. For a simple design, each workstation produces a block of product (each dimension the thickness of the sheet) rather than a connected sheet. Then you have the workstations pass the blocks "hand over hand" to the edge of the workstation sheet. (In a primitive design, with much of the complexity in the incoming control information rather than the hardware, each workstation would presumably have a general-purpose robot arm with enough reach to do this.)

*After* the blocks get to the edge of the sheet, they are added to the product. Instead of the product being built incrementally at the surface of V-folded sheets, the sheets are stacked fully parallel, just like a ream of paper, and the product is built at the edge of the ream.

The product 'extrusion' speed will be limited by three things:
1) The block delivery speed. About 1 meter per second. No limitation.
2) The speed of fastening a block in place. Even a 100-nm block has plenty of room for mechanical fasteners that can basically just snap together as fast as the blocks can be placed.
3) The width (or depth, depending on your point of view) of the sheet: how many workstations are supplying blocks to each workstation-width edge-of-sheet. The width of the sheet stack is limited by the ability to circulate cooling fluid, but it turns out that even micron-wide channels can circulate fluid for several centimeters at moderate pressure. So you can stack the sheets quite close together, making a cm-thick slab. With 100-nm workstations, that will have several thousand workstations supplying each 100-nm-square edge-of-stack area. If a workstation takes an hour to make a 100-nm block, then you're depositing several mm per hour. That's if you build the product solid; if you provide a way to shuffle blocks around at the product-deposition face, you can include voids in the product, and 'extrude' much faster; perhaps a mm per second.

Jamais: There is indeed a reason why I say an hour. On the lower side, it doesn't much matter if it's faster than that, and it becomes less plausible as I choose faster numbers. Bacteria can reproduce in 15 minutes (900 seconds). Scaling laws suggest that a 100-nm scanning probe microscope can build its mass in 100 seconds. An hour is certainly fast enough; that's a sixteen millionfold increase in manufacturing capital per day, if you want it.
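A quick back-of-envelope sketch of that figure (Python; the one-hour replication time is the illustrative number from the paragraph above):

    # Check the "sixteen millionfold per day" figure: a nanofactory that copies
    # its own mass every hour, with every copy immediately put to work.
    replication_time_hours = 1.0
    doublings_per_day = 24 / replication_time_hours
    growth_per_day = 2 ** doublings_per_day
    print(f"{doublings_per_day:.0f} doublings/day -> {growth_per_day:,.0f}x capital per day")
    # 24 doublings/day -> 16,777,216x capital per day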

On the higher side, the first nanofactory can't very well take much longer than an hour to make its mass, because if it did, it would be obsoleted before it could be built. It goes like this: A nanofactory can only be built by a smaller nanofactory. The smallest nanofactory will have to be built by very difficult lab work. So you'll be starting from maybe a 100-nm manufacturing system (10^-15 grams) and doubling sixty times to build a 10^3 gram nanofactory. Each doubling takes twice the make-your-own-mass time. So a one-hour nanofactory would take 120 hours, or five days. A one-day nanofactory would take 120 days, or four months. If you could double the speed of your 24-hour process in two months (which gives you sixty day-long "compile times" to build increasingly better hardware using the hardware you have), then the half-day nanofactory would be ready before the one-day nanofactory would.
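The same kind of sketch for the bootstrap arithmetic (Python; the 1e-15 g starting mass, the 1 kg target, and the two-replication-times-per-doubling rule are all taken from the paragraph above):

    import math

    # From a ~100 nm, ~1e-15 g manufacturing system up to a 1 kg nanofactory,
    # with each doubling costing twice the make-your-own-mass time.
    start_mass_g, target_mass_g = 1e-15, 1e3
    doublings = math.ceil(math.log2(target_mass_g / start_mass_g))  # ~60

    for replication_h in (1, 24):  # one-hour vs. one-day nanofactory
        bootstrap_days = doublings * 2 * replication_h / 24
        print(f"{replication_h:>2} h per own mass -> {doublings} doublings "
              f"-> {bootstrap_days:.0f} days")
    # 1 h  -> 60 doublings -> 5 days   (the "120 hours" above)
    # 24 h -> 60 doublings -> 120 days (the "four months" above)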

In my "primitive nanofactory" paper, which used a somewhat inefficient physical architecture in which the fabricators were a fraction of the total mass, I computed that a nanofactory on that plan could build its own mass in a few hours. This was using the Merkle pressure-controlled fabricator ("Casing an Assembler" http://www.foresight.org/Conferences/MNT6/Papers/Merkle/) with a single order of magnitude speedup to go from pressure to direct drive.

So I think that building its mass in an hour is within an order of magnitude of being right.

Hm, I think I just wrote my next newsletter science essay.

Chris

Jan-Willem Bats

Chris,

Could you answer my question as well? It's the first post.

Thanks in advance.

Jay

Chris Phoenix, CRN

Jay,

"The materials of which our products are made, are already dirt cheap. What we pay for is the information behind it."

We also pay for the labor to build it and the machines to process it. I've heard that the electronics in a car cost $1000. That's the hardware, not the software. You could argue that electronics embody information, but they also embody massive fab costs.

Also, when the cost and time to build a prototype each drop by three orders of magnitude or more, I expect the overall cost of design to drop by two orders of magnitude or more. You can work far faster when you don't have to be afraid of wasting thousands or millions of dollars with each mistake.

And there are little things that will make design easier, like the ability to massively overdesign, and not having to leave space for active components (because they'll be a million times smaller, and also easier to distribute in many cases).

Chris

michael vassar

Sorry for not including relevant context. Trouble with written media. I still think my point stands. Jim Von Ehr and Mark Sims might develop MNT with their own resources within two decades, but that seems like an awful lot to expect. I would actually bet a lot of money against it.
I don't see a compelling argument that 2 months is enough time to double nanofactory speed. Even if it takes 4 months to make the first 1 kg, it only takes 2 more to make enough factories to replace almost all existing capital, so it doesn't matter.

Tom Craver

Chris:
My "fan-fold" approach assumed a very early nanofactory capability. Likely the first macroscale product of the first working atom-precise assembler capable of copying itself, will be a simple planar grid of assemblers on a prepared substrate.

Your "primitive" nanofactory has at least the following added developments:

- Designs for a useful variety of snap-together nanoblocks.
- Designs for assembly workstations optimized for producing nanoblocks. (At minimum it has to be able to hold a block during assembly.)
- Specialized assemblers for manipulating and placing nanoblocks (or programming to coordinate atom-placement workstations to jointly manipulate blocks).
- A nanofac system architecture, laying out workstations, block transport and queuing space, feedstock supply, etc.
- Basic control software (an operating system) to coordinate all elements in a nanofactory.
- A design compiler to generate the specific sequences of operations to make a specific product based on specific nanoblocks of different sizes/types on a specific nanofactory architecture running a specific operating system.

Even assuming a lot of pre-breakthrough work with simulators, I'd expect a year or two to pass from the first self-copying workstation (and planar grids of same) to the first useful nanoblock factory.

Chris Phoenix, CRN

"Planar grids of same" implies that you can make multiple copies of the fab system, that new designs can be rapid-prototyped, and that multiple teams can work at once on different aspects. I also assume you'd have at least a rudimentary readout/feedback capability: try to move an arm through a trajectory and see if it hits anything.

I agree there'd be some debugging to do. But I don't agree that it would take a year. I think I've worked on comparable problems as an embedded software engineer. This seems actually easier than some classes of software bug, because there's more state preservation, more time-linearity, and less action-at-a-distance: you don't have race conditions and timing glitches, and you don't wipe out random memory locations when the program crashes.

With minimal pre-design, it could take a year to get a good design working. But with some understanding of how to adjust parameters to perform engineering in a design space, the debugging should basically just be adjusting the model and then adjusting the design.

A nanofactory, unlike modern software, should be simple enough to fit completely inside a simulator, letting you inspect the operation of any piece forward and backward in time. (This is only possible because it will contain so many identical pieces.) Of course, if the model is inaccurate, the simulation won't reflect reality. But this should be quickly detectable, and targeted research should make the model converge rapidly, since it will be physically plausible right from the start and only needs to have its parameters tweaked, not be redesigned.

Chris

Chris Phoenix, CRN

Michael: I agree that a nanofactory with four-month bootstrap time would be revolutionary. I'm not guaranteeing that a twice-as-fast nanofactory could be developed in two months, though it seems likely. My example would have been stronger if I had chosen an order of magnitude longer bootstrap time (taking ten hours to make its own mass). It seems pretty likely to me that a slow primitive nanofactory could be sped up by a factor of ten in twenty months. The effort might even get a boost from the fact that once a nanofactory bootstraps to microgram scale (halfway to kilogram-scale), it can make world-class supercomputers.
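For what it's worth, the "halfway" remark checks out if you count in doublings from the same ~1e-15 g starting system assumed earlier in the thread (a minimal Python sketch):

    import math

    start_g = 1e-15
    to_microgram = math.log2(1e-6 / start_g)  # ~30 doublings
    to_kilogram = math.log2(1e3 / start_g)    # ~60 doublings
    print(f"{to_microgram:.0f} of {to_kilogram:.0f} doublings done at microgram scale")
    # 30 of 60 -- halfway, measured in doublings rather than grams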

Chris

Chris Phoenix, CRN

Sorry, forgot to answer your other point. I'm not saying that Sims and Von Ehr could develop MM soon. Mark Sims isn't even trying to develop it directly, just to enable and advance development.

I'm saying they already have invested millions, even at this early date. As the total development cost shrinks toward a few million, and it becomes more obvious that MM will work as advertised, we'll see more investors, with more money.

I, and those I polled, generally agree that Moore's Law is a good estimate for cost reduction. Estimates of cost for a crash program today run around $20-100 billion IIRC. (I suspect it may turn out to be less; there hasn't been nearly enough thought given to cheaper development pathways.)

By 2015, a crash program would be well within the reach of quite a few individuals. And considering how rapidly our argument has solidified over the last few years, and how much easier it will be to develop designs at all levels with faster computers and better software and more concrete nanofactory goals, I expect MM to be an obviously workable goal by 2010, let alone 2015.
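As a rough sketch of that extrapolation (Python; the two-year halving period is an illustrative reading of "Moore's Law," not a figure from the comment):

    # Quoted crash-program estimate in 2005: $20-100 billion, assumed to halve
    # every two years (illustrative Moore's-Law-style cost decline).
    halving_years = 2.0
    for cost_2005_billions in (20, 100):
        cost_2015 = cost_2005_billions / 2 ** ((2015 - 2005) / halving_years)
        print(f"${cost_2005_billions}B in 2005 -> ${cost_2015:.1f}B in 2015")
    # $20B -> $0.6B ; $100B -> $3.1B -- within reach of wealthy individuals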

Chris

Tom Craver

Chris:
Minor points: each doubling of a 1-hour assembler only takes 1 hour if all of them can stay active. The simplest approach is probably to make them in two facing planes, and shift the planes between doublings. So 60 hours instead of 120.

Also, if you get a 1-day doubling assembler, and then need to spend 2 months making a 1 hour doubling assembler, you can expand the day-doubling nanofactory for those 2 months, and then use it to produce the 1-hour MNT in 24 hours instead of 60 hours.
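A sketch of both corrections (Python; the 60-doubling count reuses the figure from earlier in the thread):

    # Time to grow from lab scale to a 1 kg nanofactory, under different accounting.
    def bootstrap_hours(doublings, replication_h, time_per_doubling_factor):
        return doublings * replication_h * time_per_doubling_factor

    print(bootstrap_hours(60, 1, 2))   # 120 h: original "twice per doubling" rule
    print(bootstrap_hours(60, 1, 1))   # 60 h: every copy stays active

    # Second point: a 1-day doubler finishes its own bootstrap in ~60 days,
    # inside the ~2 months spent designing the 1-hour system; the resulting
    # kg-scale factory then builds the 1-hour design in about one day.
    print(bootstrap_hours(60, 24, 1) / 24)  # 60.0 days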

Chris Phoenix, CRN

Tom, good points; good thinking. Thanks. They don't weaken my argument for why a 1-hour nanofactory is likelier than a 1-week nanofactory, and they do strengthen my point about how quickly nanofactories can be bootstrapped from a primitive starting point.

Chris

The comments to this entry are closed.