

June 01, 2006



Tom Mazanec

Is resistance rapidly shrinking or rapidly growing?


NanoEnthusiast

There is one point in this article I would like to address: to what extent opposition to these ideas is a remnant form of "vitalism". When you look at some of the more thoughtless arguments against MM in the early years of the debate (namely quantum mechanics, fat fingers, and sticky fingers), it seems that any of these would deal a fatal blow not just to MM, but to life itself. So now the argument is, "Of course some type of MM is possible, but it *must* somehow look identical to life." This is odd; it appears to me to be some form of "lower-bound" anthropic principle at work.

We all know the weak anthropic principle as applied in physics: "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the Universe be old enough for it to have already done so" (definition from Wikipedia). It is clear that our model of the universe must permit some minimal complexity, or it is not consistent with our observation of our own presence. Yet it seems that some scientists are saying not only that our laws of physics must be consistent with the fact that we naturally evolved, carbon-based lifeforms exist, but that they should be fine-tuned to ensure that no molecular-manufacturing paradigm vastly superior to ours is permitted under physical law.

MM, if developed, would usher in a new era in the complexity seen in the universe, as profound as the start of life itself or the formation of planets. If we had a million universes to choose from, many would have such stifling physics that stars and planets could not even form. Others would allow radical complexity such as living systems and Drexlerian nanotechnology. Still others would be somewhere in between, allowing the former but denying the latter. What would be the odds that we would be in the small band that allowed life but not MNT? It has never made sense to accept that a system assembled by atoms randomly jostling about would represent the zenith of what our universe allows. It seems much more likely to me that any universe that could allow mindless processes like Brownian motion, mutation, and natural selection to create natural nanotech would have so much potential for complexity that many orders of magnitude of that complexity could only be reached by thinking beings who know what they are doing. This is why I have always believed MNT to be possible, though I have no idea of the likelihood of it being developed fully in my lifetime.


Brain Based Business

What a fascinating story, Mike. You'd be interested in the new developments in the robot Babybot... have you seen that one? Great posts here -- thanks!

Chris Phoenix, CRN

Off-topic comment ahead:

NanoEnthusiast, your ideas seem pretty close to Vernor Vinge's idea (in A Fire Upon the Deep) of a "slow zone" in which the level of technology is limited by physical law. The question is, how slow is our current zone? Is it just barely fast enough that humans can exist?

It's an interesting question, especially when you add the "unlikely physical constant" observation. This leads me to post a speculation my father and I came up with a few weeks ago: a new answer to the Fermi paradox.

It goes like this:
1) The universe was wired for a very high level of complexity.
2) A galaxy full of von Neumann probes is less complex than a lot of independent civilizations.
3) The number of independent civilizations can be maximized by implementing a "slow zone" that speeds up everywhere simultaneously, so that civilizations develop independently and simultaneously, protected in their early stages by speed-of-light barriers.

The mechanism for the "slow zone" is of course completely unknown to current physics.


Chris Phoenix, CRN

Tom, as far as I can tell, resistance is rapidly shrinking.


John B

What about Dr Moriarty's comments on the repeatability of accessing the same reaction site? Any pointers or work done to address that particular problem?

Any efforts on sensing failure modes, i.e., detecting bonding to the wrong site, or contamination at the worksite/on the tooltip?

Both, to me, are current weak points regarding mechanosynthesis in the mode that molecular manufacturing will need.

-John B

Chris Phoenix, CRN

John, are you talking about bootstrapping, or in the final device?

Selecting reaction sites:

In the final device, selecting a desired site shouldn't be a problem because the mechanisms that cause drift are a function of large-scale mechanical implementation and piezo actuators. A 100-nm-scale device won't "drift" by 0.1%; instead it will have qualitatively different problems such as thermal noise, which appear solvable.
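The thermal-noise point can be made quantitative with the standard equipartition estimate: modeling a positioning element as a spring of stiffness k, the RMS thermal displacement is x_rms = sqrt(k_B·T / k). A minimal sketch, where the stiffness values are illustrative assumptions rather than figures from this post:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rms_thermal_displacement(stiffness_n_per_m, temp_k=300.0):
    """Equipartition estimate: <x^2> = k_B*T/k, so x_rms = sqrt(k_B*T/k) in meters."""
    return math.sqrt(K_B * temp_k / stiffness_n_per_m)

# Illustrative stiffnesses for a nanoscale positioning mechanism (assumed values):
for k in (1.0, 10.0, 100.0):  # N/m
    x_rms = rms_thermal_displacement(k)
    print(f"k = {k:6.1f} N/m -> x_rms = {x_rms * 1e12:6.1f} pm")
```

At a stiffness of 100 N/m the RMS excursion comes out to a few picometers, well below a typical bond length (~150 pm), which is the sense in which thermal noise looks like an engineering problem rather than a fundamental barrier.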

In bootstrapping, it depends on the technology you're using. For example, the biopolymer pathway to diamondoid will build small devices before you have to worry about selecting reaction sites. If you're using scanning probe bootstrapping, then there are a variety of position sensing technologies, such as interferometry and capacitive sensing, that can be applied to micron-scale probe tips for active correction--this requires engineering work but does not appear to be a fundamental limitation.
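The active-correction idea can be sketched as a simple feedback loop: sense the tip position, compare it with the target, and feed the error back to the actuator. This is a toy proportional-integral loop against a simulated drifting stage, purely illustrative; the `Stage` class, drift rate, and gains are all made up for the example and model no real instrument:

```python
def correct_drift(target, sense, actuate, steps=200, kp=0.5, ki=0.05):
    """Toy PI loop: repeatedly nudge the actuator to cancel the sensed error."""
    integral = 0.0
    for _ in range(steps):
        error = target - sense()
        integral += error
        actuate(kp * error + ki * integral)
    return target - sense()  # residual error after correction

class Stage:
    """Simulated stage with constant drift (illustrative numbers, in nm)."""
    def __init__(self):
        self.pos = 0.0
    def sense(self):
        self.pos += 0.01  # 0.01 nm of drift per control cycle
        return self.pos
    def actuate(self, delta):
        self.pos += delta

stage = Stage()
residual = correct_drift(10.0, stage.sense, stage.actuate)
print(f"residual error after correction: {residual:.6f} nm")
```

The integral term is what cancels a constant drift: a proportional-only loop would settle with a steady offset, while the accumulated error term drives the residual toward zero.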

Detecting errors:

In the final device, the best thing to do with failure modes is to prevent them (including contamination), not sense and correct them. If they are rare enough, they don't have to be corrected--just shut down the device that failed and use one of its neighbors (simple redundancy). I would expect most mechanosynthetic errors to propagate (within the workstation) so that if left undetected, a few more atom placements in the failed area would jam the works in a way that's easy to detect. If a particular operation has a particular failure mode that can't be engineered out, then detecting a particular well-studied failure should require nothing more than a simple mechanically-based probe.
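The "shut down the failed device and use a neighbor" argument is just redundancy arithmetic: if each station fails independently with probability p over a job, the chance that all n redundant stations fail is p^n. A sketch with illustrative numbers (the failure rates are assumptions for the example, not figures from the post):

```python
def prob_all_fail(p_single, n_redundant):
    """Probability that every one of n independent redundant stations fails."""
    return p_single ** n_redundant

def expected_working(total, p_single):
    """Expected number of still-working stations out of `total`."""
    return total * (1.0 - p_single)

# Assume 1% of stations fail during a build job (an illustrative number):
p = 0.01
for n in (1, 2, 3):
    print(f"{n} redundant station(s): P(all fail) = {prob_all_fail(p, n):.0e}")
print(f"out of 1,000,000 stations, ~{expected_working(1_000_000, p):,.0f} still work")
```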

During bootstrapping, errors will be more common, but there'll be lots of room for human-mediated detection and intervention. I'd expect research-level fabrication systems to include scanning/imaging capabilities that could be used as a general-purpose error detector with some help from human image processing.


You know, to me at least, general relativity seems much less complex than Newtonian physics, and the light-speed limit doesn't seem at all like a complication but rather like simplicity. Since I expect the laws of physics to be simple (if they weren't, Occam's razor wouldn't work), I expect light-speed limits to be fundamental, though perhaps somewhat circumventable. A changing or eliminated light-speed barrier would seem utterly inelegant and unlike what I would expect from physics.

I have always seen "A Fire Upon the Deep" as a metaphor of just the sort alluded to above: a story meant to paint in bright contrast the absurdity of the beliefs that most people hold about the future, namely the belief that the difficulty of a technological achievement must be proportional to its impact. Such beliefs clearly tell people that MNT is centuries away.

The comments to this entry are closed.