Over the past few years, CRN has made several suggestions for controlling molecular manufacturing. An example is giving away lots of restricted nanofactories to reduce the demand for unrestricted nanofactories. Most of these suggestions have been criticized as leading straight to massive oppression. While recognizing that possibility, I have also recognized that a lack of such administrative measures would likewise lead straight to massive oppression.
This seems paradoxical. But I think the answer is that we have been thinking too small. We have been assuming that whatever we proposed would be implemented in the context of some bigger system. The trouble is that molecular manufacturing will be the biggest system.
Any system needs limits, or it will run off the rails. The simplest example is a reproducing population, which will indulge in exponential growth until it runs out of resources and crashes. Another example, I think, is the "excesses" of behavior that are seen in revolutionary contexts. So human systems need limits.
Through all of history, the presence of limits has been a reasonable assumption. Other nations; geography, climate, or disease; distance; if nothing else, Kipling's "Gods of the Copybook Headings" would impose limits if a society ran too far off the rails. And occasionally a society would be stable long enough to develop and agree on a morality that provided useful limits.
It's tempting to think that the world has developed a new worldview -- the Enlightenment -- that will provide moral limits. I used to think that. I no longer do. The Enlightenment was supported by the brief period when people could be several times as productive with machines as with manual labor. That made people very valuable. However, now that we're developing automation, people can be many times as productive, and we don't need all that productivity. And indeed, the Enlightenment seems to be fading.
It's tempting to think that, left to themselves, people will be generally good. History, in both microcosm and macrocosm, shows that this doesn't work any better than communism did. There will be a few spoilers/cheaters, and everyone else will follow them straight into the pit. The occasional saint won't be enough to stop this degeneration.
It's tempting to think that, now that we have digital computers, everything has changed and the old rules of scarcity and competition needn't apply. I do recognize the fundamental unlimited-sumness of digital data transfer; see our "Three Systems of Action" paper. But digital information does not replace existing systems or issues wholesale. And I will not believe that digital domains can moderate their own behavior until I stop receiving spam and phishhooks.
It's tempting to think that an ongoing competition between humans would provide limits. But I don't believe it, for two reasons. First, such a competition would probably be unstable, winner-take-all, and end up in the same massive oppression that we're all concerned about. Second, the contest would quickly shift to computer-assisted design and attack, which would be even worse than all-out war between MM-enabled mere humans. It should also be noted that civilians would probably be a major liability in such conflicts, easy to kill and requiring major resources (not to mention oppressive lifestyle changes) to defend.
Back to the problem of no limits... Molecular manufacturing will give its wielders extreme power -- certainly enough power to overcome all non-human limits (at least within the context of the planet; in space, there will be other limits). And we're kind of short on useful internal limits; the current trend in capitalism is to deny the desirability of limits. What's left?
Somehow, we have to establish a most-powerful system that limits itself and provides limits for the rest of our activities. Long ago, Eric Drexler proposed an Active Shield. Others have proposed building an AI to govern us -- though they have not yet explained how to design limits into the AI. I have proposed creating a government of people who have accepted modifications to their biochemistry to limit some of their human impulses. All of these suggestions have problems.
Open communication and accountability may supply part of the answer. (David Brin has proposed "reciprocal accountability".) It's been noted that democracies rarely have famines or go to war with each other. Communication and accountability may be able to overcome the race to the bottom that happens when humans are left to their own devices. But communication and accountability depend on creation and maintenance of the infrastructure, on continued widespread attention, and on forensic ability (being able to connect effect back to cause in order to identify perpetrators). Recent trends in U.S. media and democracy are not encouraging; it seems people would rather see into bedrooms than boardrooms. And it's not clear whether people's voices still will be important once production becomes sufficiently automated that nation-scale productivity can be maintained with near-zero labor.
If we can find limits, then within those limits, some of CRN's suggestions will probably work. In other words, the problem with our suggestions is not inherent in the suggestions themselves; it is that the suggestions rely on something else to provide limits. Without limits nothing can be stable, but with limits, wise administration will still be needed, and our suggestions may help with that.