

September 11, 2004




Yay! Congrats, Chris!

I liked Toth-Fejel's study a lot; it must be exciting to know you're going to work with a like-minded person with such good ideas.

Great great great!

Don't disappoint us, heehee

Mike Deering



Wow, very impressive! Congrats! Now leave an impression!

Brett Bellmore

Regarding automated assembly of large, awkwardly shaped objects, a suggestion:

Don't try to use convergent assembly the entire way up. A convergent assembly subsystem could feed blocks of some manageable size, say 1 cubic mm, to an array of distribution channels and assembly arms, which build up the product one layer at a time. Suppose, simply to throw out numbers, that two assembly arms are placed in each square centimeter of the array (one to assemble, the other to support the product), and that they operate at a rate of 100 blocks per second. Assuming the prior convergent assembly stages and distribution system could keep the arms fed, the product would be assembled at a speed of one millimeter per second, a full meter of product thickness in under 17 minutes. Clearly, nothing to sneeze at. So we don't lose much by terminating the convergent assembly short of the last stage.
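Brett's throughput figure is easy to verify; a minimal Python sketch using the same illustrative numbers from his comment (1 mm³ blocks, one active arm per square centimeter, 100 blocks placed per second):

```python
# Sanity check of the layer-by-layer assembly rate sketched above.
# All numbers are the comment's illustrative assumptions, not measurements.
blocks_per_layer_per_cm2 = 10 * 10   # a 1 mm-thick layer over 1 cm^2 holds 100 blocks
placement_rate = 100                 # blocks per second per active arm
growth_rate_mm_s = placement_rate / blocks_per_layer_per_cm2
minutes_per_meter = 1000 / growth_rate_mm_s / 60
print(growth_rate_mm_s)              # 1.0 (mm of thickness per second)
print(round(minutes_per_meter, 1))   # 16.7 (minutes per meter of product)
```

One active arm per cm² placing 100 blocks per second grows the product at 1 mm/s, which is where the "under 17 minutes per meter" figure comes from.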

The advantage of this scheme is that the assembly array could, among other things, build extensions to itself to handle unexpectedly large products. And because it's capable of building products comparable to its own size in two dimensions, and arbitrarily long, there's no need to design the products in a compacted state which unfolds after they're finished. Products can be designed and manufactured in their final conformation.

Finally, the array can equally function as a disposal device, disassembling unneeded products. And by simply assembling the necessary surface, it can function as a display device, data entry terminal, table top, or floor.

jim moore

Way to go!!

I too have been thinking about nano-factory and nano-product design issues, and I have come up with a suggestion similar to Brett's. You may want to think about terminating the convergent assembly process at 10-100 microns, but instead of producing a block, make a fiber. The fiber should be able to:
1.) maintain a vacuum internal to the fiber
2.) interface with the surfaces of other fibers
3.) have simple and standard conduits for power and information
4.) protect the internal nano-machines from UV radiation
5.) be cleanly "cut" in two, and attach its ends to other fibers
Nano-products can be "woven" together, then "unwoven," and the fibers can be reused in new products.

This continues the process of throwing away design space in order to achieve greater simplicity.

Brett Bellmore

It strikes me that your proposed fiber would be more like a variant on the utility fog concept than a way of building structures fully optimized at the molecular level. That's not really a bad thing, as having some intelligent material around on a spacecraft would be very handy, and such fiber would be a lot stronger than the fog.

But the specific application here is space manufacture, where you're liable to need fully optimized products. Your fibers probably wouldn't make very good rocket engines, for instance, though they'd make darned good space suits.

Michael Vassar

GREAT! Congratulations Chris.

jim moore

I see Machine Phase Fiber as a cross between Chris's Nano-factory and Josh Hall's Utility Fog.
Like Utility Fog, Machine Phase Fibers have a simple, standardized shape, but the different segments of the fibers can contain different functional nano-blocks, whereas foglets are all the same.
In terms of ease of reusability of parts: a product made from Utility Fog is easiest to take apart and reassemble into something else; next would come a product made from Machine Phase Fibers; and a product made from Chris's Nano-factory would be the most difficult to take apart and reassemble into something else.
In terms of being optimized for a particular task, a product from the Nano-factory would be the most optimized and a product made from Utility Fog the least optimized.


First of course congratulations Chris,

I think we should all do everything we can to support and help Chris in his activities. Although I'm sure some of the information is classified, we could still give our opinions on possible scenarios for helpful directions of research.

First, I think we should address where we are today in the technology, in particular the current state of chemistry, physics, mathematics, computer science, and so on. Once we have established where we are, we can then look forward and plan for the future. As to the issue of space flight, which I'm sure is foremost on NASA's mind: it would seem to me this is simply a vehicle to continue the research in MM. A relationship with NASA is perhaps the best fit for MM, as NASA has the facilities, computers, people, time, and motivation for this undertaking. So once again we all wish Chris the very best and Godspeed.

I have read and reviewed the particulars on what I call the "Lego blocks building assembler." As I have said in the past, it is my opinion that the first step in the 14-step process to a kilogram-size useful product is the most critical and difficult. Sidestepping this issue by utilizing a feedstock material that is itself the first step reduces the complexity substantially. As stated, if we begin with a block of a few hundred molecules and at each generation join six or eight blocks, in time we can construct kilogram-size products in some 14 generations or so.

One question that comes to mind concerns the relationship between computers and the useful product described above. If each block were 200 diamond molecules, then it would seem that the smallest element within a CPU would need to be at least 200 molecules across. This size seems relatively large considering the current state of lithography. So in this scenario, this particular construction method would not seem the best fit for producing CPUs.

A possible fix for this problem is to design the individual block with a series of single-wall carbon tubes within the block. This would provide for an increase in complexity and available designs using this feedstock.

Another question that comes to mind is the issue of temperature and diamond burning. If at some future time a spacecraft is constructed of diamond, it would seem the re-entry temperatures, and indeed perhaps the lift-off forces, could cause the diamond to burn. This issue would seem to need to be addressed, although if the ship is constructed here on Earth of other materials and then deployed, and feedstock is then provided to the ship, one could envision additional modifications being made at that point.

It would seem the fix for this problem is to utilize at least two feedstock elements: one diamond, and the other a material capable of withstanding temperature variations exceeding those of diamond. This brings us to another question: in a best-case scenario, would it be better to have only one feedstock, or are the benefits outweighed by having six or seven different feedstock elements?

I will continue to give these issues additional thought and will get back to everyone in the time to come. This particular format is perhaps helpful in getting everyone to identify a specific question and give a specific solution to it, as we seem to be in somewhat of a habit of defining questions without giving any sort of possible fix for the problem.


I have reviewed my previous post and have determined I was thinking on a two-dimensional plane when I stated the smallest element within a CPU would need to be at least 200 molecules across. Indeed, if the entire diamond block were made up of 200 molecules, the distance across the block would only be some six molecules, as we are discussing a cube. At this size we do see a situation where CPUs could be produced in number and with quality consistent with the finest technology available.
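The cube correction checks out; a one-line Python verification:

```python
# A 200-molecule cube is about 6 molecules on a side, since 6^3 = 216.
side = 200 ** (1 / 3)
print(round(side, 2))   # 5.85
```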

Chris Phoenix, CRN

Wow, thanks, all! Both for the congratulations and for the advice. I haven't even started work yet, and already people are giving me good ideas!

I'll be looking at all sorts of ways of making products. Whatever the process, I suspect the result will be mostly hollow because you just don't need much strength for most products. Probably the shape will be maintained through pressurization, though fractal trusses are a possibility. But it's nice to have a smooth skin to keep out dirt and light, and in that case why not pressurize it?

A product that's flimsy until pressurized may be hard to build full-size: hard to build the thin walls freestanding. Easier to compress it and attach the blocks where they can be supported.

I'm not sure the fibers are an improvement over the blocks. Weaving them together would leave gaps. And blocks can simulate fibers, but fibers can only simulate blocks if they're made of blocks anyway. So physically I don't see the advantage. As a design level of abstraction, they might be useful. I'll think on that some more.

Todd, the problem with complex feedstock molecules is that they're expensive. It's probably better, even at the cost of energy and time, to use very cheap chemicals.



When quantum computing comes of age, you can forget silicon-age thinking. Moore's Law will become "that old speed limit", a quaint remembrance in a future of disappearing limits. by Fidgital

According to a report from Yale University's research laboratories, a way to read a qubit's state without changing it has been created.


I read this today, although the news is perhaps a few days old. Quantum computers are one of those destabilizing technologies, like MM, DNA research, organ printing, and others. We speak much of MM and the concept of a self-replicating system from the standpoint of moving molecules around. It should be noted there is a second self-replicating system that is sometimes overlooked: robotics. It would seem to me the core issue with robotics is pattern recognition, the ability of a computer/robot to identify things around it. That is to say, the ability of a robot to see and know what everything is in its environment, its relationship to everything in its environment, and the relationship of everything to everything else.

The solution to this problem is, in one case, a large database of digital pictures of everything. This database would contain digital photos of telephones, computers, people, houses, cars, and the like. As a robot moved through its environment, it would identify everything based on patterns recognized within the database. This is a performance-driven application and requires substantial computer time. But with the use of quantum computers and their improved ability to pull data from a database, the computer time required is reduced to near zero.

At this point I am not prepared to definitively say that a 100-bit quantum computer would be capable of AI, but I would not rule it out as a possibility. However, a 100-bit quantum computer would be capable of accessing a large database and identifying things at a rate unprecedented by today's standards. Also, the size reduction of the quantum computer would allow the device to be carried in a man-sized robot.

As the quote above states, we should not limit our discussion to only a 100-bit quantum computer; indeed, we see a situation, utilizing silicon, where the manufacture of a computer with thousands or tens of thousands of quantum bits could be achieved with relative ease. Following this train of thought, a one-million or even one-billion-bit quantum computer would seem not outside the realm of possibility, given that each bit is only perhaps one million oscillating microwave photons. Although I do admit they do not give a total size of the device, and it is unclear how large the "Cooper box" is, as well as how large the support devices are that provide power, microwaves, and readout of the photons.

So we see a potential situation, even in the near term, where substantial quantum computers become available and immediately change everything. One of the applications that would seem foremost, other than AI, is the use of quantum computers in DNA research, as one could assume all combinations of DNA could be input into a quantum computer and the relationships between each DNA fragment could then be computed.

This technology seems to be running parallel to MM and to other destabilizing technologies. I must say I am following it with great interest and anticipation of the changes to come.

Tom Craver

Chris: Re using inflated structures in space:

Space structures can reach rather extreme temperatures, including low enough that many gases will liquefy even at relatively low pressures. You wouldn't want your spaceship going limp just because it got too cold. Nor would you want an over-pressure situation causing damage if it gets too hot, e.g. during atmospheric re-entry or aerobraking.

You might deal with this with sensors and heat pipes (or heaters), though that adds to nanoblock and system level complexity of designs. At a minimum, heat flow around a structure should be considered - e.g. if a large surface area structure is connected only at relatively small points, can enough heat flow into it to maintain inflation?
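The size of the pressure swing Tom describes follows directly from the ideal gas law at constant volume; a minimal Python sketch, with hypothetical fill conditions (1 atm at room temperature, not figures from the comment):

```python
# For a fixed amount of gas in a fixed volume, P2 = P1 * (T2 / T1).
# Fill conditions below are hypothetical, chosen only to illustrate the swing.
P_fill_kpa = 101.3   # fill pressure, kPa (about 1 atm)
T_fill_k = 300.0     # fill temperature, K
for T in (100.0, 300.0, 600.0):
    P = P_fill_kpa * T / T_fill_k
    print(f"{T:.0f} K -> {P:.1f} kPa")
```

A skin filled at about 1 atm and 300 K falls to roughly a third of its design pressure at 100 K and doubles it at 600 K, the kind of swing that would call for the sensors, heaters, or heat pipes mentioned above.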

If you've got plenty of mass and time and energy (which seems likely in many cases), it probably makes sense to think in terms of statically rigid structures, at least for anything on the exterior of a space vehicle - e.g. a landing strut for a lunar lander. With zero-defect materials, a spacecraft would still be very light and high performance - better to 'over-engineer' early on, and leave optimizations to future generations with hands-on experience.

Chris Phoenix, CRN

Tom, I hadn't even thought of that.

But there aren't many structures that have to be big and hollow. Structures with people in them have to have controlled temperature anyway. Tanks can collapse when empty, as long as the material doesn't flex enough to damage itself. Things that have to be held apart (telescope optics, nuclear reactors) can station-keep or spin on tethers. (NIAC produced a great design for a huge multipart stationkeeping telescope with a thin dynamic primary.)

One other thing I just now realized about inflation: the mass ratio of compressed gas to storage tank is independent of the size of the tank. So if you want to ship up an inflatable structure and the compressed gas to fill it, then to a first approximation, the gas tank will weigh as much as the structure! Of course you can get around that by storing the gas cold or chemically bound.
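That size-independence falls out of thin-wall scaling: a spherical tank's minimum wall mass and the gas mass it holds are both proportional to P*V, so their ratio depends only on the gas and the material. A Python sketch, with assumed material numbers (roughly carbon-fiber class) that are illustrative rather than taken from the thread:

```python
import math

# Thin-walled spherical tank: t = P*r / (2*sigma), so
# wall mass = 4*pi*r^2 * t * rho = (3/2) * P * V * rho / sigma.
# Ideal-gas mass at the same P, V: m_gas = P * V * M / (R * T).
# Both scale as P*V, so the gas-to-tank mass ratio is size-independent.
R = 8.314          # J/(mol K)
T = 300.0          # K
M_H2 = 2.016e-3    # kg/mol, hydrogen
sigma = 2.0e9      # Pa, assumed working strength of the wall material
rho = 1600.0       # kg/m^3, assumed wall-material density

def gas_to_tank_ratio(P, r):
    V = 4.0 / 3.0 * math.pi * r ** 3
    m_gas = P * V * M_H2 / (R * T)
    m_wall = 1.5 * P * V * rho / sigma
    return m_gas / m_wall

print(gas_to_tank_ratio(70e6, 0.1))   # small high-pressure tank
print(gas_to_tank_ratio(1e6, 10.0))   # huge low-pressure tank: same ratio
```

With these assumed numbers the ratio comes out around 0.67, in the same ballpark as the roughly equal gas and tank masses Chris estimates in a later comment for 10X-stronger materials.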


jim moore

With a large inflatable structure, most of the mass will be in the gas. So the only way to get around lifting it off Earth is to get your gas in outer space.

Chris Phoenix, CRN

Jim, today's most advanced hydrogen-gas tanks are doing well to store 10% of their mass. Assuming a 10X increase in material strength, the mass of the gas will be about equal to the mass of the tank. (Assuming you use H2.)

The point of inflated structures is not just to save on volume, it's to save on mass by eliminating structures under compressive stress. I think that even with a 3X mass hit (the storage tank, the gas, and the structural tank all weighing about the same), you'll still frequently be better off than building a structure with compressed members. Fractal trusses *might* change my opinion on that.



