In the early years of speculation about risks and benefits of nanotechnology, a primary concern was so-called gray goo, hypothetical runaway self-replicating nanomachines that could, if unchecked, consume the planet. In more recent years, worries about such goo have receded, at least among the cognoscenti, and have been replaced by other fears.
Today, CRN and other investigators are more prone to highlight the danger of a nano-fueled arms race, which could be unstable and spiral out of control. Other issues of concern include environmental damage caused by overproduction of cheap products, ill-considered massive engineering projects with unforeseen disastrous consequences, social disruption from ubiquitous intrusive surveillance, and economic upheavals from the nearly overnight collapse of numerous industries. All of these are worthy of study today and perhaps of action at an appropriate time.
Another danger that has not received much attention, at least outside of a limited circle, is the possibility of runaway artificial intelligence. The projected ability of molecular manufacturing to build supercomputers smaller than a grain of sand, and to network these vastly powerful computers, could provide a substrate for a recursively self-improving "intelligent" software product. In theory, if this potentially smarter-than-human AI were not designed carefully in the first place, it might develop in a direction unfriendly to humans, or even to human existence.
This AI worry is one that sounds so much like science fiction that CRN has been reluctant to discuss it. However, as Stephen Hawking has said, "Today's science fiction is often tomorrow's science fact." We don't yet have an answer to this perplexing problem; indeed, no one does. But it surely must be added to the list of matters to be studied diligently as we head into the nano era.