Bottom-up Design Technology

May 02, 2006

Comments


Hal

Even after reading this, it still seems ridiculous to suggest that such large-scale devices can be built without error.

The article itself admits that radiation damage will produce an enormous error rate; for a device like this, something like 10^19 (10,000,000,000,000,000,000) atoms will be out of place. The confident assurance that reconstruction will not be an issue for "well-chosen structures" assumes without evidence that such structures can always be found. And the analogy to computers ignores the fact that errors are a very real part of computer operation and require elaborate error-correction codes to be built into the system: today in memory elements, and in the future in processing elements as well.
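(For concreteness, here is a minimal sketch of the kind of single-bit error correction used in memory: a Hamming(7,4) code in Python. It is purely illustrative and not a scheme discussed in the article or this thread.)

    # Minimal Hamming(7,4) sketch: 4 data bits protected by 3 parity bits,
    # so any single flipped bit can be located and repaired. Purely
    # illustrative; real ECC memory uses wider codes (e.g. SECDED).
    def encode(d):
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4            # covers codeword positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4            # covers codeword positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4            # covers codeword positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
        if pos:
            c = c.copy()
            c[pos - 1] ^= 1          # repair the single-bit error
        return [c[2], c[4], c[5], c[6]]

    data = [1, 0, 1, 1]
    word = encode(data)
    word[5] ^= 1                     # simulate one bit flipped by radiation
    assert decode(word) == data      # the flip is detected and corrected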

The undeniable truth is that nanotech devices will be full of errors. The assumption that such errors can be dealt with easily or without much complexity is without foundation. Nanotech proponents sometimes use a supposedly conservative assumption that all errors render their subsystem completely non-functional, but this is actually an extremely *optimistic* assumption. Errors in the real world will be variable, intermittent, and often nearly undetectable. Designing nanotech systems that can operate robustly in the presence of a wide variety of failure modes will present enormous and unprecedented challenges.

Michael Deering

Chris, your article is perfect! It shows exactly why it will be possible to build nanofactories that function without error. Of course there will always be those who don't understand digital concepts and persist in denial. Perfection in operation is definitely an achievable engineering objective, and easier to accomplish in nanomachinery than in any other device.

Atoms are perfect. Diamondoid is rigid. The operating environment is controllable: vacuum, temperature, radiation shielding. The last objection of the sceptics will probably be, "What about high-energy cosmic rays?", which can be dealt with by a little functional redundancy and automated self-repair systems.

NanoEnthusiast

When defective components ARE found, how will they be removed from the nanofactory? This seems like a non-trivial matter compared to the trivial one of zeroing out some sectors of RAM and trying an algorithm again.

Chris Phoenix, CRN

Hal, I don't know where you got your 10^19 number, but it's way too high. It'd be more like 10^13 errors after a year's exposure to terrestrial background radiation.
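(Purely as an illustration of how such an estimate scales: total errors are roughly the number of atoms times the per-atom damage probability per year. Both numbers below are placeholder assumptions chosen only to show the arithmetic; they are not figures from the post or the Primitive Nanofactory paper.)

    # Illustrative order-of-magnitude arithmetic only; both inputs are
    # placeholder assumptions, not figures from the post or any paper.
    atoms_in_device = 1e25            # assumed: roughly 200 g of carbon
    damage_per_atom_per_year = 1e-12  # assumed per-atom displacement probability per year

    errors_per_year = atoms_in_device * damage_per_atom_per_year
    print(f"{errors_per_year:.0e} displaced atoms per year")   # 1e+13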

There is now evidence that diamond (even unterminated diamond!) can be built without reconstruction. (There's also the possibility that reconstructions can be a useful part of the fabrication process. It seems very likely to me that CVD diamond surfaces are reconstructed, yet realign themselves as the process progresses.)

The assumption that errors always render systems non-functional is used in a different context: to understand how much redundancy must be provided to keep a system functioning in the face of worst-case errors. Detecting errors is a different question.

Pinpoint errors in the product (introduced either during or after fabrication) are "beyond the scope" at this time. Use of mechanical and digital voting systems should deal with many of them. (A mechanical voting system might consist of a breakable shaft to each sub-component of an array, so that if one of them didn't do the same thing as the others, it would be taken offline by the breaking of the shaft.) There are enough different ways to design a product that I'm not too worried about dealing with product errors.
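(A digital voting system could look something like the Python sketch below: run redundant copies, accept the majority output, and take dissenters offline. Three-way redundancy and all names here are illustrative assumptions, not details from the post.)

    # Minimal sketch of digital majority voting: run redundant sub-components,
    # accept the output a majority agrees on, and flag dissenters as failed.
    # Three-way redundancy and all names are illustrative assumptions.
    from collections import Counter

    def vote(outputs):
        winner, count = Counter(outputs).most_common(1)[0]
        if count <= len(outputs) // 2:
            raise RuntimeError("no majority -- error cannot be masked")
        failed = [i for i, out in enumerate(outputs) if out != winner]
        return winner, failed

    # Three redundant stations report their result; one misbehaves.
    result, failed = vote(["placed_ok", "placed_ok", "misplaced"])
    print(result, "-> take station(s)", failed, "offline")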

So let's look at fabrication errors. A single error that goes undetected--that doesn't cause a fabrication jam--counts as a product error. An error that causes a fabrication jam will of course be detectable. So what's left? There are a few things to worry about...

A fabrication station that makes undetected errors in each of its products could create enough correlated errors to break redundancy schemes that rely on randomly distributed errors. (Outside the atmosphere, cosmic rays pose a similar design problem.) Likewise, an error that was not detected at low levels, but caused assembly errors later on, could mess up a whole product.
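(To see why correlated errors matter, here is a small illustrative Monte Carlo; the error rate is a made-up assumption. With independent errors, 2-of-3 voting fails only when two copies err at once; a common-cause error hits all copies together and defeats the vote outright.)

    # Illustrative Monte Carlo; the error rate is a made-up assumption.
    # Independent errors defeat 2-of-3 voting only when two copies err at
    # once; a common-cause (correlated) error hits all copies together.
    import random

    p = 0.01            # assumed per-copy error probability
    trials = 100_000
    random.seed(0)

    indep_fail = corr_fail = 0
    for _ in range(trials):
        errs = [random.random() < p for _ in range(3)]
        if sum(errs) >= 2:          # independent case: 2+ simultaneous errors
            indep_fail += 1
        if random.random() < p:     # correlated case: one event breaks all copies
            corr_fail += 1

    print(f"independent errors defeat the vote: {indep_fail / trials:.5f}")  # ~3*p^2
    print(f"correlated errors defeat the vote:  {corr_fail / trials:.5f}")   # ~p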

But how likely are such errors? Remember that an assembly error involves an atom bonded incorrectly. In most cases, other atoms will have to be bonded to it, and those bonding operations will fail, leaving more atoms in the wrong place... easily detectable. A surface atom might be wrong without necessarily jamming the fabrication process, but surface atom placement can be detected by feel. And parts can be tested for functionality (including the functionality of assembly) before being assembled.

NanoEnthusiast, I calculated in my Primitive Nanofactory paper that about one-millionth of a nanofactory would fail per product made. At that rate, you don't have to remove dead parts and machinery--just warehouse them in the factory. By the time it's made 100,000 products, I guarantee it'll be as obsolete as an IBM PCjr.
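(A quick arithmetic check, taking the one-in-a-million figure at face value:)

    # Sketch of the cumulative loss implied by the quoted rate: if one-millionth
    # of the nanofactory fails per product, then after 100,000 products roughly
    # a tenth of it has been warehoused as dead machinery.
    fail_per_product = 1e-6
    products = 100_000

    fraction_dead = 1 - (1 - fail_per_product) ** products   # compounding losses
    print(f"{fraction_dead:.1%} of the factory has failed")  # about 9.5%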

Chris
