I've been thinking recently about uncertainty, change, problems, and intolerable danger.
When someone invents a new thing, it usually creates some problems. Those problems should be evaluated, and a decision made as to whether the benefits of the thing outweigh them.
Of course, not all of the problems can be known in advance. The question then becomes, how can unknown risks be predicted and evaluated? If something bad happens, how easy will it be to repair or compensate for the problem?
There are, it seems to me, several different levels of problem that can arise from a new thing.
The first level is engineered effects. If I cut down an acre of trees to build a factory, I know that the trees will be gone. If I take a sleeping pill, I know that I will get sleepy.
The second level is side effects. The factory will increase traffic on local roads. Antibiotics may upset the stomach. Side effects can often be predicted, though their exact magnitude may be hard to estimate, so there may be disagreement as to their expected cost.
The third level is emergent phenomena, which arise from a combination of multiple factors and are often very difficult to predict. Two medicines may interact dangerously, where each by itself would have been mostly harmless. Chernobyl might not have blown up had it not been operated in the Soviet Union by operators indoctrinated to believe that all Soviet reactors were inherently safe.
The fourth level is new and self-sustaining patterns. If the factory imports used tires, and the tires carry in alien mosquitoes, the mosquitoes will not go away when the factory stops receiving tires. If the medicine causes cancer, stopping the medicine will not fix the problem.
The fifth level is self-evolving patterns. These can be harder to fix than self-sustaining but static patterns, because they present a moving target.
Different people will tolerate different levels of problems. Some people, of course, are uncomfortable with change of any kind, even deliberate change. In contrast, an engineer might be pretty comfortable with side effects--most are fixable, because they are causally linked to the engineered system.
Emergent phenomena can be harder to get a handle on--hard even to notice that there's a problem, then hard to know exactly what causes it. In the real world, with millions of causes and effects and interactions, an observed symptom may not be traceable to a known cause. The corollary is that a new thing will probably cause at least some processes that go unnoticed and problems that go unattributed. (Of course, there will likely be unattributed benefits, as well. But few people would choose to accept an unknown risk in exchange for an unknown benefit.)
Self-sustaining processes are obviously problematic. The phrase "playing with fire" is very apt. If the scope of the process and the means of controlling that scope are known with certainty, then it may be acceptable; otherwise, most people would probably not be at all sanguine about a course of action that could start a self-sustaining process.
As people gain a deeper understanding of how things are interconnected, more and more things appear to move farther and farther down the scale, into levels that are almost certain to cause discomfort. Even if the factory does not dump chemicals, will increased runoff from the deforested land reduce water quality downstream? Will the factory attract crime? Are its products socially responsible?
Too much thinking about possible harms, especially those farther down the scale, can cause paralysis. A counter-reaction is to assert (both to oneself and to others) that one's pet project is not really that far down the scale. New technologies? Consumers can choose how to use them wisely--no unexpected side effects. Artificial bacteria? They'll be confined to the lab (the system is engineerable), and even if they escape, they won't survive in the wild (the limits are all known). Nanomaterials? Well of course that Drexler stuff is impossible--no self-sustaining patterns here--and we're just making smaller versions of known materials, so there aren't even any emergent properties to worry about.
A mature molecular manufacturing technology will cause effects all up and down the scale. Side effects: less land covered in mine tailings, but more covered in solar cells. Emergent phenomena: social disruption. Self-sustaining patterns: at least potentially, black markets in less-restricted technology, and unstable arms races.
Molecular manufacturing risks may go even farther than that, by enabling humans to do things they couldn't do before. Humans may find it easier to build new self-propagating systems, whether biological or non-biological. And if molecular manufacturing accelerates AI research, it introduces a new level of risk that's not even on the chart, because we have no experience with it: evolving systems that can change their own substrate. Such systems may have the potential to accelerate beyond all hope of catching up with them.
In recent conversations about diverse sources of risk--very diverse, including artificial bacteria, runaway AI, and the wisdom of beaming messages into space to attract aliens of unknown character--I have come to the tentative conclusion that every argument, whether it claims to demonstrate increased or decreased risk, has an exception. It is very difficult to prove or disprove a risk.
In the end, people will hear the arguments and then apply their native optimism or pessimism to the situation. So are people too optimistic, or too pessimistic? That is, will they tend to underestimate or overestimate risk? In general, people seem to be pessimistic; they probably wouldn't take an unknown benefit accompanied by an unknown risk, and would be suspicious even of an unknown benefit with no strings attached.
On the other hand, a major reason for optimism--that the world's systems are far larger than our ability to affect them through engineering--is rapidly becoming less true.
If you were hoping for a conclusion, I don't have one.