
September 10, 2004



Richard Jones

Of course, just because I'm sceptical about one of another sceptic's arguments for being sceptical doesn't mean I'm any less sceptical about MNT myself. I have my own reasons for scepticism (which are described in my book Soft Machines).

It's as well to remind ourselves that the original and proper meaning of scepticism refers to an attitude that submits both sides of an argument to careful scrutiny. That's just critical thinking, something that I think is indispensable, though it's often in short supply.

Chris Phoenix, CRN

Richard, I did note in the post that you're a skeptic. But you're skeptical that it's worth doing, not that it's possible. That gives us a basis for communication.

Something I've been meaning to ask you: Given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesized with wet chemistry, why is it that life never discovered graphite?

As I understand your arguments, one reason you're skeptical is that we claim device performance substantially better than biology, and you don't think we can out-engineer biology so easily. But that argument depends on the assumption that biological evolution has been able to search all of the design space that is accessible to us. It looks to me like evolution never even tried graphite.


Brett Bellmore

I think the fundamental problem with the "you can't do better than biology, because evolution has already optimized living organisms" meme is that evolution is a hill-climbing algorithm, and such algorithms have a well-known tendency to get trapped in local maxima.
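Brett's hill-climbing point can be illustrated in a few lines of code. This is a generic sketch (the fitness landscape, starting points, and step size are all invented for the example), not a model of biological evolution: a climber that only accepts uphill moves stays on whichever peak's slope it starts on, even when a much higher peak exists nearby.

```python
from math import exp

def fitness(x):
    # Two peaks: a low one near x = -1, a much higher one near x = 2.
    return exp(-(x + 1) ** 2) + 3 * exp(-(x - 2) ** 2)

def hill_climb(x, step=0.05, iters=1000):
    # Accept only moves that increase fitness -- evolution's ground rule.
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

local = hill_climb(-1.5)   # starts on the slope of the low peak: stuck there
best = hill_climb(1.0)     # starts on the slope of the high peak
print(round(local, 2), round(best, 2))   # -1.0 2.0
```

The first climber never reaches the higher peak at x = 2, because every route to it passes through lower fitness, which an uphill-only rule forbids.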

The classic example of the inability of evolution to transcend local optimization would be the retina of the eye: the human eye has the light-sensitive cells at the back of the retina, and the data transmission cells, along with the blood supply, at the front, resulting in a hole in the retina (the "blind spot"), among other problems.

We know from independent evolutions of the eye, that this design defect is in no way necessary. But many millions of years of evolution have not cured it, because there isn't a path to that cure which doesn't involve intermediate steps that are seriously worse.

Any critic of "intelligent design" theory could cite a dozen such examples off the top of their head, by the way.

I view nanotechnology as a "restart" of the algorithm, near the base of a much higher hill.

Richard Jones

Chris, your question is interesting and I will answer it in three parts.

Firstly, I don't think that biology has solved all the problems it faces optimally - it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one is, the more optimal the solutions are. This is obvious when one thinks about it; the problems of making nanoscale machines were the first problems biology had to solve, it had the longest to do it, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like Brett's eye) biology has to co-opt solutions that were evolved to solve some other problem. I would argue that many of the local maxima that Brett rightly says evolution gets trapped in are actually near-optimal solutions of nanotechnology problems that have had to be sub-optimally adapted for larger-scale operation. As single-molecule biophysics progresses and reveals just how efficient many biological nanomachines are, this view, I think, becomes more compelling.

Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of different properties (strength, stiffness, fracture toughness), as well as external factors such as difficulty of processing and cost (either in money for humans or in energy for biology), and achieve the best compromise set of properties for fitness for purpose. So the question you should ask is: in what circumstances would the property of high strength be so valuable for an organism, particularly a nanoscale organism, that all other factors would be overruled? I can't actually think of many, as organisms, particularly small ones, generally need toughness, resilience and self-healing properties rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

Finally, don't forget that strength isn't really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison, the linear polymers that biology uses when it needs high strength are actually much more forgiving, if you can work out how to get them aligned - it's much easier to make a long polymer with no defects than it is to make a two- or three-dimensional structure with a similar degree of perfection.

Chris Phoenix, CRN

Richard, first your final point: strength is determined by bond density, until you get to really high strains where failures propagate. It's hard to get high bond density in linear polymers. Graphite has extremely high bond density. Graphite-with-defects may be less tough than polymers (I don't know), but you'll have a hard time convincing me it's less strong. And you'll have a hard time convincing me that in macro-scale organisms toughness is always that much more important than strength.

For example, they recently found snails that had rapidly evolved magnetite shoes; I think the phrase was "evolutionary instant." Water-deposited minerals, and composites of minerals and protein, are very accessible to biochemistry. But would you care to argue that graphite is accessible to biochemistry, and no organism has ever found it evolutionarily useful?

Your point about nanoscale systems having had the longest to evolve is a good one. I agree with it enthusiastically--because it supports my point that evolution has not searched the entire space yet. It may be that in another 20 billion years, some large animal would learn to make graphite and would stop suffering fallen arches. And it may be that if we want to build a machine the size of a large animal, graphite is a better material to use than protein.

Now that we've established that evolution has not searched the entire space, let me suggest a particular area that it has been completely unable to search. That area is vacuum. Evolution may have found the best solutions for aqueous chemistry. (Or it may just have found the best solution for near-boiling anaerobic aqueous chemistry with lots of nucleic and amino acids available in the environment.) And it may have found the best solution for cellular organization. (Or it may just have found the best solution for a non-sterile environment.)

As long as we're talking about life, I would argue that linear polymers are favored because they're easy to disassemble; this allows efficient metabolism. It may be that a nanoscale organism that continually has to self-repair and reconfigure is better off with weaker materials simply because it takes less energy to rearrange them. Now, if you want to argue that metabolism is a necessary part of any maximally useful machine, go ahead--but I don't think you'll find many takers. So perhaps a 2D or 3D polymer would be better for constructing things that are used for a fixed purpose until they break--the energy cost of recycling would be higher, but depending on the application, the reduction in bulk may make up for that. (How long have organisms been flying? Do you really want to argue that birds would have no use for carbon fiber?)

But back to vacuum. You've answered this in the past with rhetoric about how diamondoid nanomachines may only be suited for outer space. Let's try to move past that this time. First consider that many useful machines, including computers, need have no sliding seals between the inside and the outside. Then consider the number of ways in which large sticky molecules can be kept away from areas where you don't want them. To be viable, a 10-kg nanofactory must take in perhaps 1000 kg of chemicals over its lifetime. Assume 1 ppb of large sticky impurities, or 1 mg. The surface area of the nanofactory's inputs is about 10^22 square nanometers. 1 mg of large (say, kilodalton) molecules is 6x10^17 molecules. So a filter of nanometer-sized pores covering the production modules might expect to find at worst 1 in 10,000 pores clogged after ingesting 1000 kg of feedstock. So the nanofactory can take in chemicals without the intakes, whether filters or sliders, getting jammed by contaminants. A 10-kilowatt (1 cm^3, and that's being generous) block of nanoscale motors outputting their power on a common shaft may have only a few millimeters of sliding interface to be kept clean by stacked seals. Seems worth the engineering.
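The clogging estimate above can be checked with a few lines of arithmetic. The inputs (1000 kg of lifetime feedstock, 1 mg of kilodalton-scale sticky contaminants, roughly 10^22 nanometre-sized intake pores) are the comment's own assumptions, not measured values:

```python
# Reproducing the feedstock-contamination estimate: how many sticky
# molecules arrive, and what fraction of intake pores they could clog.

AVOGADRO = 6.022e23

feedstock_kg = 1000                     # lifetime intake of a 10 kg nanofactory
impurity_kg = feedstock_kg * 1e-9       # sticky contaminants at 1 ppb = 1 mg
molar_mass_g = 1000                     # "kilodalton" molecules: ~1000 g/mol
pores = 1e22                            # nm-scale pores over the intake surface

impurity_molecules = impurity_kg * 1000 / molar_mass_g * AVOGADRO
clogged_fraction = impurity_molecules / pores

print(f"{impurity_molecules:.1e} sticky molecules")    # ~6.0e17
print(f"1 clogged pore per {1 / clogged_fraction:.0f}")  # ~1 in 16,600
```

One contaminant per 16,000-odd pores is comfortably within the "at worst 1 in 10,000" bound quoted above.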

So if large vacuum-filled machines can be built and operated without undue maintenance, the question is whether the performance is worth it. The answer here is obviously yes. In vacuum you can get much higher speeds, and thus power densities, than in water (as implied by the 10 kW 1 cm^3 motor; actually, the max power density is somewhere around 10^15 W/m^3). Also, perfect cleanliness makes it much easier to engineer extremely low-friction bearings.

Again, this is for machine applications: fairly simple, modular function. No distributed metabolism, very limited homeostasis, limited or no self-repair (fault-tolerance is almost as good, and a lot easier), no immune system.


Richard Jones

Chris, you need to read "Strong Solids" by A. Kelly (OUP, possibly out of print but a standard text in many libraries and still absolutely sound) to see why theoretical arguments about the strength of materials based on their bond energy densities aren't relevant in the real world. The key point is that you are right to say "until you get to really high strains where failures propagate", but what's important here is not the global strain applied macroscopically, but the local strain near a defect, which is always much larger. As for the strength of polymers, check out how strong Kevlar is (no figures to hand as I'm away from the office), and compare that to carbon fibre (which of course is the commercially available form of structural graphite). Kevlar has a fairly complex monomer unit but that isn't actually directly important here; what's important in making it strong is simply that all the molecules are straight and aligned in the direction of the tension. Even simple polythene is just as strong, if you only align the molecules, and indeed a commercial high-strength fibre - Spectra (originally commercialised by Allied Signal) - is just that, simple polyethylene made by a process that straightens and aligns the molecules. In fact, if you look deeply into the way spiders process the proteins they use to make silk, there are actually very strong analogies with the way Kevlar is made.

I'll address some of your other points later...

Richard Jones

Evolution is a very efficient way of searching vast volumes of configuration space (to see how vast these are, just ask yourself how many different 100-amino-acid protein sequences there are). But of course, what it measures is fitness, which has to be defined with respect to some particular environment. As such, the biology we know about is only optimised with respect to one rather narrow range of environments; with that I absolutely agree with you. As I suspect that life has evolved many times in the universe, I'm quite prepared to believe that somewhere evolution has produced technologies optimised for all kinds of other environments too. And indeed we should think of evolution as a tool which we should be using when we develop new technologies; I touch on this a little in my book, but it's a very interesting theme that should be developed further.

As you say, I have said in the past that MNT might well be useful for outer space. This clearly isn't just rhetoric, as the source of your latest tranche of funding should make clear! But I've also, more recently, discussed the origins of my aversion to high vacuum, and I'm prepared to accept that if the pay-off is big enough then the cost and trouble of vacuum may be worth it. That's really an economic question rather than a scientific one.

Richard Jones

Chris writes: "It may be that in another 20 billion years, some large animal would learn to make graphite and would stop suffering fallen arches. " But some large animal has! Evolution took a slightly circuitous route to get there, involving the evolution of consciousness, problem solving ability and cultural memory, but it did the job.

An only slightly flippant response from someone who's happy to be part of the natural world.


Richard Jones


A final remark - since, Chris, you are so keen on graphite, why do you think it is that although technically it would be quite possible for cars, commercial airplanes and skyscrapers to be built from graphite, they usually aren't?

Chris Phoenix, CRN

Richard, as you remark, the question of fitness for a particular application is an economic one. But the economics of nano-built nano are not intuitive. For example, the last 1% of engineering closure reduces cost-per-feature by many orders of magnitude.

Today, a lot of products come with a high-tech tax: they can only be produced with expensive machines. That (plus inertia) is why we don't build things out of graphite. What happens when the high-tech tax disappears? We will have graphite (or buckytube) cars and planes.

You didn't just say MNT will be useful for outer space; you implied it wouldn't be useful terrestrially because it'd be too hard to use it in this environment. That is the argument I was objecting to. My nanofactory was recently criticized on the grounds that it would have to be placed inside a refrigerator to avoid stress on the cooling system! That argument came from a rocket scientist, so I hesitate to call it silly, but it was certainly irrelevant. I would put your implication that nanofactories will be less useful on earth than in outer space in the same class.

I agree that evolution is a useful way to find solutions in a domain. What I disagree with is the idea that the historical domain of biological evolution (linear organic polymers) encompassed the best solutions for nanoscale machinery. Note that machinery is quite different from life. It can be modular on a much larger scale. It can be far more special-purpose. It exists not within an ecosystem but as part of a human plan.

Explain to me why life almost never uses electric current. It uses charge separation (nerve transmission, chlorophyll, molecule shape, etc), but never actually ships electrons to distant locations (except in electric eels). I claim the answer is that it's hard to build good insulators. But if you criticize machine nanotech for not making more use of Brownian motion, shouldn't you criticize biology for ignoring electricity?

On material strength, it's true that defects cut strength. And in some materials, like glass, a defect concentrates force and produces very high local strain, so a tiny nick in a glass fiber can drastically weaken it. But this can be much less of a problem for anisotropic materials. Aligned-chain polymers like Kevlar are extremely anisotropic. Are you assuming that nano-built diamondoid or graphene materials will have to be isotropic? On any scale above a few nm, they won't be.


Karl Gallagher

On the refrigerator comment--I was trying to describe the basic infrastructure needed to support the nanofactory. A cooling system that can keep the factory ice cold while sitting in desert sunlight certainly can handle it not being insulated. But that means you're signing up for a much more powerful cooling system, which means you've extended the time for replicating the nanofactory and its necessary support equipment. The high-end cooling system you described would take a lot longer to build than a simple refrigerator so taking that approach slows down your exponential production.

Richard Jones

Chris asks 'Explain to me why life almost never uses electric current.' It's a fascinating question, one that I address in chapter 7 of my book, where I do pretty much what you suggest and criticise biology for not using it. Well, criticism is a rather inappropriate word in this context, but I do ask why coherent electron transport was developed in the context of photosynthesis but then never used for anything else, so that molecular electronics, which seems on the face of it to have been possible for biology, was never developed. I guess you must have been so cross with me by then that you weren't actually reading what I was writing.

Don't forget about dislocations too when thinking about why materials don't have their ideal strength.

I don't think any economics is intuitive, let alone nano-economics. What is this high-tech tax of which you speak? It is essentially the interest payments on the capital you have used to do the R&D to develop the processes and equipment needed to make the goods. Since the development of MNT would require large R&D expenditure, the same will apply to its products.

Chris Phoenix, CRN

Richard, I just re-read chapter 7 ("Wetware"), and skimmed it twice more to make sure, and saw no mention of the electronics of photosynthesis and only a brief and non-critical mention that neurons are slower than coax. If it's there, it's well-hidden.

If biology can be criticized for not using electronics, then we must ask the question: will reduced use of Brownian motion, but full use of electronics, be a good tradeoff?

Dislocations can be lumped with defects as weakening a region of the material. Again, in an anisotropic material they don't have to have much effect on strength as long as they don't extend beyond a single fiber.

The development of MNT would indeed require large R&D expenditure. But the building of additional nanofactories would require neither R&D, nor labor, nor excessive materials or energy cost. And if lots of nanofactories were built at low incremental cost, the value of their products would dwarf the original R&D cost.

So if the nanofactory developers are smart, they'll give away nanofactories and product design software, and take a percentage of the purchase price of each manufactured product. Within a few years they could have an income of a significant fraction of the world's GDP. (Note that this specifies that products can be made "open source". A less smart developer would try to avoid that, trying to get a cut of every product made. A smart one would know that open source is a huge fount of creativity and only a minor drain on sales due to the fact that geeks don't care about user interface.)


Chris Phoenix, CRN

Karl, a nanofactory presents about a square meter to the sun, absorbing a kilowatt if it's painted flat black in the desert. This is insignificant compared to the 150 kW internal power use.

Yes, the cooling system is powerful. Lessee... Cool Chips claims you can get 100 W of cooling in a square centimeter
(http://www.coolchips.gi/slides/hvac/frame04.html note 1).
So to cool 200 kW would require 2,000 cm^2 or 1/5 square meter. They say their chips will be 55% of Carnot efficiency, so it'll take 0.67 W additional power per W of cooling. That's a bit more than I figured, but very doable. They say their chips will be a few mm thick, but that's just because of clunky materials; there's no fundamental reason why they can't be a few microns thick. In other words, the mass of the cooling machinery itself can be negligible.
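The chip-area and input-power figures above can be reproduced in a short sketch. The 100 W/cm^2 flux and 55%-of-Carnot efficiency are the Cool Chips claims quoted in the comment; the hot- and cold-side temperatures are my own illustrative assumptions (slightly different temperatures reproduce the 0.67 W-per-W figure):

```python
# Back-of-envelope sizing of a thermionic cooling stage for a 200 kW load.

heat_load_W = 200_000          # 200 kW of heat to pump out
flux_W_per_cm2 = 100           # claimed cooling capacity per cm^2 of chip

area_cm2 = heat_load_W / flux_W_per_cm2
print(area_cm2)                # 2000.0 cm^2, i.e. 1/5 of a square meter

T_cold, T_hot = 300.0, 400.0   # kelvin; assumed values, not from the post
carnot_cop = T_cold / (T_hot - T_cold)
cop = 0.55 * carnot_cop        # 55% of the Carnot coefficient of performance

print(round(1 / cop, 2))       # ~0.61 W of input power per W of cooling
```

The input-power ratio is sensitive to the assumed temperature lift, which is why the comment's 0.67 W/W and this sketch's 0.61 W/W differ slightly.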

Now I'm going to stop doing your homework for you, and assert without calculating that the pumps and pipes for an evaporative cooler will not weigh much either.

This is what I keep trying to get through to you guys: with advanced diamondoid manufacturing, you can't just assume that something will be hard. Usually it won't be. You'll be right far more often if you try to prove to yourself that it will be easy. I've been doing this for fifteen years, so I can usually get away with intuition on what will be easy. You will usually have to prove it to yourselves. But I'll say it again: Don't try to prove it's hard; try to prove it's easy, and you'll be more creative and more often right. Once you've thought of thermionic refrigeration, then you can apply hardheaded engineering to see if it'll work.


Richard Jones

Chris, my apologies, the bit about photosynthesis I was thinking of was at the beginning of chapter 8, where I explicitly point out the curiosity that nature uses coherent electron transport only to transmit energy, not information. Actually I don't really develop the theme in the book very far, but my speculation as to why this is so is that the scaling of diffusion with distance means that it's very effective on small scales. So chemical computing is a very good design choice for bacteria, but a bad one for humans, who are stuck with the ingenious but ramshackle adaptation of chemical computing to long distances that our neurons are.
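The diffusion-scaling argument can be made numerical. With an assumed diffusion coefficient of ~10 um^2/s for a small protein in water (an illustrative value, not from the thread), the characteristic time t ~ L^2 / (6D) crosses a bacterium in milliseconds, a eukaryotic cell in about a second (consistent with the "~1 sec to diffuse 10 microns" figure quoted elsewhere in this thread), and a metre of nerve in centuries:

```python
# Diffusion timescales, t ~ L^2 / (6 D): fast across a bacterium,
# hopeless across a human. D is an assumed small-protein value.

D_um2_per_s = 10.0   # illustrative diffusion coefficient, um^2/s

def diffusion_time_s(length_um):
    # Characteristic time for 3-D diffusion over a distance L.
    return length_um ** 2 / (6 * D_um2_per_s)

for label, L_um in [("across a bacterium (1 um)", 1),
                    ("across a eukaryotic cell (10 um)", 10),
                    ("along a 1 m nerve (10^6 um)", 1_000_000)]:
    print(f"{label}: {diffusion_time_s(L_um):.3g} s")
```

Because the time grows as the square of the distance, chemical signalling stops being a sensible design choice somewhere between the cell and the organism.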

Your comment "Again, in an anisotropic material they don't have to have much effect on strength as long as they don't extend beyond a single fiber." is simply not true; to see this you only have to compare the actual strength of commercial carbon fibres (which are highly anisotropic) with the theoretical strength of graphite; it's orders of magnitude less because of the dislocations, stacking faults and other defects that their manufacturing process leaves them with.

Brett Bellmore

Commercial carbon fibers ARE anisotropic, but they're not polymers. Rather, as I understand them, they're an anisotropic structure derived from the parent material's anisotropy, superimposed on an isotropic structure at the molecular level. And since the scale of the anisotropic structure is much larger than the dislocations, you have the opportunity for easy propagation of the dislocations over substantial distances, even if the propagation is eventually terminated by the large-scale anisotropy. In a material which was anisotropic at the molecular level, such as an array of nanotubes, dislocations couldn't easily propagate.

Richard Jones

Carbon fibre isn't isotropic at any level. The best way to picture it is to imagine a pile of pieces of newspaper, somewhat ragged and torn, rolled up into a sausage. This means you are always pulling in the plane of the graphite sheets. A carbon nanotube, by contrast, is a single sheet rolled into a perfect cylinder. A typical strength of a carbon fibre is 3.5 GPa. Estimates of the theoretical strength of a carbon nanotube vary from about 50 to 200 GPa, but the best that's been achieved in practice for a monofilament nanotube composite so far is about 1 GPa.

Chris Phoenix, CRN

Come on, Richard. I didn't say every anisotropic solid would approach theoretical strength. I said that a single defect didn't have to have much effect on overall strength as long as it didn't extend beyond a single fiber. Obviously in a material as full of defects as carbon fiber, every fiber will have defects, and the material will be weak. In an MNT-built structure, you can have far less than one defect per fiber, and the material (if it's designed well) can benefit from most of the theoretical strength of the perfect fibers. And in a perfect fiber, covalent bond density does matter; so a well-designed, well-built material based on buckytubes should be quite a bit stronger and tougher than Kevlar or spider silk.


Chris Phoenix, CRN

Chemical communication is good for systems satisfying all the following conditions:
1) nanoscale
2) in water
3) broadcast communication
4) slow (~1 sec to diffuse 10 microns)

When we build nanotech digital computers, only the first of these conditions will hold. So we will want to invent a new system, one that evolution never looked for. There are surely systems better than rod-logic, but we already know that rod-logic can do computation many orders of magnitude better than today's computers. And we know that biomimesis probably won't help us design powerful digital computers--that approach may not even beat today's lithography-built computers.

Similar arguments can be advanced for actuators and for structure.


Karl Gallagher

Replying to Chris' post above (9/16 8:10):

Looking at thermionics--it doesn't look proven to me. Great stuff if it works, but if it doesn't, does the whole nanofactory concept collapse? No, you just have to use some tech that does work, and you should budget for it. There shouldn't be more than one breakthrough on the critical path; that's asking for disaster in a real project. In this case it's asking to be dismissed as an unrealistic handwaver. Given that you put zero as the mass of the cooling system (and other support systems) when calculating the reproduction time for the nanofactory, I've still got doubts.

You'll be right far more often if you try to prove to yourself that it will be easy.

I'm not going to take an optimistic attitude looking at this stuff. Optimistic engineers destroyed Challenger and Columbia. Optimistic engineers burned people to death in Pintos. It is immoral for engineers to be optimistic. We are obligated to contemplate all the things that can go wrong and prove something will work safely.

You will usually have to prove it to yourselves.

I'm not taking on the burden of proof here.

Let's face it, Chris, you're asking a lot of us.

You want working scientists and engineers to give up safer career opportunities to work on MNT.
You want investors and gov't agencies to put their money into MNT research.
You want policy-makers to leave off worrying about wars, budgets, and elections to decide how to handle MNT.

They're not going to do it unless they see proof. You bear the burden of proof to show that this stuff can have a real impact. Optimism won't cut it. They'll send an engineer to look over your work. If he comes back and says "They neglect this major factor, they assume that thing has a mass of zero when it's got to be at least 20% of the total system, and I can't see the deployment working at all" you won't get an appointment and your issue will be off their agenda. Right now the deck is stacked against you even worse because we just went through a big wave of optimists saying "trust me, this will work out in the end" and a lot of people got burned.

I'm not asking you to do my "homework." I'm offering you a chance to convince me that the "big step" of exponential production can happen. If you're too busy, no problem, I've got other stuff to do too. But if you can't convince me you're going to have a hell of a time convincing the people you need to convince.

Brett Bellmore

"Optimistic engineers destroyed Challenger and Columbia."

I think that was optimistic managers, actually.

"Optimistic engineers burned people to death in Pintos."

Hey, I work in that industry, ok? The Pinto was NOT an unusually dangerous car. ALL cars have their weak points. What you had there were engineers making a cost benefit analysis, and a jury being outraged that they didn't place an infinite value on human life in the analysis. "Optimistic" engineers would have assumed that the car wouldn't get in collisions. The Pinto engineers assumed that a certain number of deaths in collisions would be acceptable, given that they couldn't build a perfect car anybody could afford.

Chris Phoenix, CRN

Karl, you shifted the sense of the word "optimistic." I was using it in a purely technical sense: Can this technology accomplish that performance?

To shift to talking about risk analysis, then accuse me of being too optimistic, then claim that the less optimistic thing to do is to ignore a claimed risk until it's proved, is broken on many levels.

I'm not trying to convince you to invest in a molecular manufacturing company. I'm trying to convince you that it is plausible that this stuff will work well enough that we should think about planning ahead for it. Are you seriously arguing that no one should spend any attention on it until every detail is proved? That's not responsible at all.

At the moment, we have a bunch of estimates based on theory. They all say that MM will be really, really big. We have a bunch of skeptics based on emotion and politics. They all say we should ignore it. You're now sounding a lot like the politicized skeptics. Theirs is a stupid position, and you're not stupid. What are you reacting to?

By the way, I did not say that thermionics was the only way refrigeration could work. There's also sonic refrigeration: one moving part, no sliding interfaces. Just put 180 dB (?) of sound into a tuned cavity, and one part gets hot while the other gets cold. I haven't looked into this because I've never before seen a suggestion that thermionics won't work. What's your basis for that? Just playing devil's advocate?

There's at least one other refrigeration technology that hasn't been mentioned yet. How much do you want to bet that all four technologies either won't work or will increase the mass of the nanofactory system by more than 5%?

Suppose someone came up with a completely new jet engine technology. Theoretically, it was great. It hadn't yet been demonstrated, and there were some questions about the fuel.

Now if that person went to the President and said "We should pour money into this because it will give us massive military superiority, because it will perform 80% better than today's tech," I would agree that that was premature.

But if that person went to the President and said, "We should study this further because it may be up to 80% better than today's tech once the bugs are worked out, and the Chinese and Indians and Russians are probably already working on it and we're not," I would say that was not premature at all. Would you?

I don't want people to give up their careers to work on MNT. I want people to STOP telling their students not even to read Nanosystems.

And you did take on the burden of proof when you published a criticism of my paper. That criticism was unfounded or actually wrong in almost every detail. You asserted that the refrigerator would weigh too much to allow rapid replication. Prove it, or retract it. You asserted that the small parts would be terribly vibration-sensitive. I pointed out that their resonant frequencies will be in the GHz. Answer me, or retract it.


Karl Gallagher

What are you reacting to?

Your abstract, which says the transition from the first assembler to a flood of working products will take "weeks." That's a huge discontinuity in the growth of technology, and that's your justification for pushing for policies to be put in place for MNT before even prototypes are developed. You point to the nanofactory paper as proof that it is that urgent. I don't buy it.

You describe the timeline for producing a block while neglecting assembly time. I think assembly will take more than 10% of the total time, and could take longer still depending on how fast the parts can actually be moved.

You describe the number of cycles needed to duplicate a factory but neglect the time to produce the support equipment. However much the power plant, cooling plant, etc. weigh, they're not going to weigh zero.

You claim the time to produce a new nanofactory is mass-driven because it can be unfolded from a 10 cm x 10 cm x 30 cm block to a 1 m x 1 m x 0.5 m assembly, but you give no detailed explanation or illustration. I think that would be extremely difficult, if it's possible at all. Enabling it would require adding so many components that you probably couldn't still fit the whole nanofactory in the three product blocks, and it would probably be harder than just building much of the factory in its final configuration.
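For scale, here is a quick volume comparison using only the dimensions quoted above. It shows how sparse the unfolded assembly has to be, which is part of why the unfolding scheme seems so demanding:

```python
# Volume check for the unfolding claim, using the dimensions quoted above.
folded_cm3 = 10 * 10 * 30       # folded block: 10 cm x 10 cm x 30 cm
unfolded_cm3 = 100 * 100 * 50   # unfolded assembly: 1 m x 1 m x 0.5 m

fill_fraction = folded_cm3 / unfolded_cm3
print(folded_cm3)       # 3000 cm^3
print(unfolded_cm3)     # 500000 cm^3
print(fill_fraction)    # 0.006 -- solid material fills only 0.6% of the envelope
```

In other words, the unfolded machine is more than 99% empty space, so the "unfolding" is really deploying thin panels across a large working envelope.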

You claim the nanofactory can produce all the parts needed to replicate it but neglect whether the support equipment can be made from hydrocarbons. If one piece can't be built then you don't have exponential production, you have rapid production to the limit set by whatever's making the critical piece.

I add all that together and I don't see a rapid leap to an MNT economy created by the first assembler. You said "weeks." I don't see proof.

I responded to your comments on my post. I'd rather have the debate there if you want to get into details, since it supports threading. But for your specific questions above:

"You asserted that the refrigerator would weigh too much to allow rapid replication."
The support equipment, including the refrigerator, will take time to replicate. You described replication time as only applying to the factory. So including the rest takes longer.
3 + 1 > 3
Does this prevent "rapid" replication by itself? No, but it's not as rapid as you said it'd be.
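The "3 + 1 > 3" arithmetic can be made explicit with a toy model. All numbers here are hypothetical placeholders chosen to mirror the inequality, not figures from the paper:

```python
# Toy model: replication time with vs. without support equipment.
# Masses and throughput are hypothetical, chosen only to mirror "3 + 1 > 3".
factory_mass = 3.0    # mass of the nanofactory proper (arbitrary units)
support_mass = 1.0    # refrigerator, power plant, etc. (nonzero by assumption)
throughput = 1.0      # mass produced per unit time (assumed constant)

t_factory_only = factory_mass / throughput
t_with_support = (factory_mass + support_mass) / throughput

print(t_factory_only)  # 3.0
print(t_with_support)  # 4.0 -- still rapid, but longer than claimed
```

The point is not the particular numbers but that any nonzero support mass stretches the doubling time beyond the factory-only estimate.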

"You asserted that the small parts would be terribly vibration-sensitive. I pointed out that their resonant frequencies will be in the GHz."

Okay, the assemblers proper may not need vibration isolation. But between that scale and the 10.5 cm final product there's going to be a lot of vibration-sensitive operations. If you don't isolate the factory you'll have perfectly good nanoblocks with screwed-up assembly.

Now here's a question for you. I want to make a 1 cm cube starting with six 1 cm x 1 cm x 0.1 cm diamondoid plates fabricated as a 1 cm x 1 cm x 0.6 cm block. How does the block unfold into the cube?

Tom Craver


Hmm - a new specialty occupation for the nano-age: origami expert!

Folding cube: Think of the cube as composed of two sets of three faces (ABC and DEF), each in the shape of a U, and linearly connected by hinges. Pick two adjacent sides of the cube - A and D (on the ends of the two strips, respectively) - and hinge them along a side as they sit in the cube configuration.

Now you can flip ABC and DEF strips over to fold onto the outside of the cube (with the two hinges of each strip folding in opposite directions), and finally fold A and D together to form a block.

The hinges on the thick 3D face plates need to allow at least 270 degrees of movement for AB and DE, and at least 90 degrees of movement for BC and EF.

Use a hinge as thick as a single plate. Visualize two plates A and B in their flat-folded configuration, with the hinge at the end of plate A, and plate B attached to the hinge only at one corner-edge. That hinge can then turn 270 degrees around from flat folded to form a 90 degree angle on the other side. The other hinges (BC and EF) only need to bend 90 degrees out from the flat-folded position, easily achieved.

Apply motors to the hinges, add hinge position and edge contact sensors, and locking mechanisms to connect the edges together once in contact. Simple - in theory.

If you want to make a 1 cm plate fold out into a 10 cm strip, you can do the "accordion" folding of the strip. Multiple strips could form a 10 cm plate, but edge connections to adjacent strips will need to be made once the strips are in position. Or you could make a plate out of a single spiral-folded strip, which would greatly simplify lining up the edges that need to be attached - they could simply unfold far enough that interlocking edges catch on each other and lock.
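The accordion scheme is easy to book-keep. Here is a minimal sketch; the function name and the +/-180 degree sign convention are my own, illustrating the 1 cm to 10 cm example:

```python
# Sketch of the accordion fold described above: a strip of length L folded
# into panels of width w needs L/w panels joined by L/w - 1 hinges that
# alternate fold direction so the strip packs flat.
def accordion_fold(strip_len_cm, panel_width_cm):
    n_panels = strip_len_cm // panel_width_cm
    assert n_panels * panel_width_cm == strip_len_cm, "panels must tile the strip"
    n_hinges = n_panels - 1
    # alternate +180 / -180 degree folds: adjacent hinges bend opposite ways
    fold_angles = [180 if i % 2 == 0 else -180 for i in range(n_hinges)]
    return n_panels, n_hinges, fold_angles

# The 1 cm -> 10 cm example: ten 1 cm panels, nine alternating hinges.
panels, hinges, angles = accordion_fold(10, 1)
print(panels, hinges)  # 10 9
```

The spiral variant would replace the alternating signs with same-direction folds of decreasing radius, which is why its edges can "catch" as they sweep past each other.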
