

April 24, 2005



michael vassar

Here's a blog link involved in our conversation. The nominal topic is cheap & easy genetic engineering, but little of what is said doesn't apply to molecular manufacturing.

Mike Treder, CRN

Where's the beef--I mean, the link?


Patent and copyright laws protect vested financial interests. Boyle considers this motive, but then dismisses it with the suggestion that regulators are merely stupid, as if a regulatory bureaucracy doesn't have similar vested interests. 6,000 children die daily in Africa for lack of medicines priced at the cost of production; a function of patent laws designed to "grow" the African consumer market. Most people in the world cannot afford a computer and internet access to take advantage of "freely" available content. This isn't due to a scarcity of resources, but is facilitated by the same dynamic served by copyright laws. As if merely being born into a country bountiful enough to have the capacity to bicker over copyright laws isn't enough of an incentive to add to the global knowledge base... Whatever goos or Hitler AIs occur in the future may have been preventable had the wealthy X% taken advantage of the intellectual potential of all citizens.

michael vassar

Sorry, here's the link I meant.
Actually cdnprodigy, there will probably never be a goo, and there will *definitely* never be a "Hitler AI". True AI only happens once, and it brings about a singularity, for better or for worse. How it's made will determine whether anything survives the singularity, but there certainly won't be any war, torture, or malice involved; just something perfectly banal, like a sphere of smiley faces expanding at the speed of light and replacing everything else.

Chris Phoenix, CRN

Mike, don't forget Glenn Reynolds; he's not focused on nanotech, but has written about it.

I'd add a caution about your list of nanotech experts: several of the blogs listed are inexpert about molecular manufacturing. Some are expert about nanoscale technology, but we shouldn't imply that they are expert about molecular manufacturing.


Mike Treder, CRN

Chris, of course you're right that most of the bloggers I listed are "inexpert about molecular manufacturing." In fact, the only two that I would describe as expert about MM are Nanodot and us.

The article I was quoting from did not distinguish between nanoscale technologies and MM; it just said "six or seven experts in nanotechnology." By a loose interpretation of the term, Richard Jones, TNT, Howard Lovy, and Josh Wolfe all would qualify.

What's remarkable to me is that *even if* you include the broad definition of nanotechnologies, there still are only a half-dozen "experts" writing about it regularly on the Web.

The field is quite young, but like a child stricken with acromegaly, it may grow brute strength well before it achieves maturity. That is, MM could enable far greater power than our institutions are prepared to manage safely and responsibly.

The dearth of blogging experts, especially about MM and its implications, is strong evidence for the conclusion that the world is likely to be caught by surprise a few years from now. If so, nanotech could develop into an exceedingly ugly and dangerous child.


Michael, I agree the threat of a goo-wake devouring the earth's surface at the speed of sound is unlikely. But even Nick Bostrom's goo models would tie up a significant amount of resources to combat. A couple dozen goo-bombs released simultaneously might be too much to overcome. Even one, operating successfully in the earth's mantle, might be impossible to defend against.
I think questions about AI singularities ultimately reach very deeply into the field of personal identity theory. We want to know if we will be on the other side of an AI singularity. Would our perfectly reconstructed minds, down to whatever level of precision you care to measure, constitute us? If the AI is truly friendly, it would probably not waste any resources allowing our often-unfriendly minds to exist. If it is truly moral, it would probably recede away from earth at the speed of light and let us wallow in our own mess; we would have to deal with the emergence of multiple AIs. There is the possibility a singularity will have no bearing on human activities. Ideally, some sort of "grandfathering" rule would be in effect: progress, with the contingency that past actors get the fruits of the future. But an AI that can't find an infinite energy source is likely to cannibalize our brains (and all other matter) for utilitarian ends.

The comments to this entry are closed.