In high-volume molecular manufacturing, computers can't be used to control robots handling individual molecules, because computation requires too much power. However, computer-like control can be achieved without the use of computers, which means that robots can still be used where appropriate.
If every move of a robot had to be computed each time a single molecular fragment was added to a product, then products would cost vastly too much per kilogram - or even per microgram. For this reason, Eric Drexler has recently written that robots shouldn't be used in the smallest stages of molecular manufacturing.
"Any operation that requires computation would be far to slow and expensive — we live in a world where machines are huge compared to the circuitry in a microprocessor. In the nanoworld, it will be the digital computing systems that are huge compared to the machines. This is why machines of the general sort in the video — fast and brainless — will become essential."
Hidden within that statement are actually three or four reasons not to use robots. Let's consider them separately:
- Computers will be large compared to the manufacturing systems. This is true, at least for the kind of mechanical computer that Drexler analyzed in Nanosystems. But if a computer is used to drive multiple manufacturing stations, its size doesn't have to be prohibitive (a rough sketch of the idea follows this list).
- Computation requires too much energy for computer-controlled robots to handle individual molecules. This is also true, if each motion of the robots requires computation. But this doesn't have to be the case.
- Robots may be larger and slower than special-purpose machines.
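Here is the sketch promised in the first bullet. It is my own minimal construction (the class names and the station count are invented for illustration), showing how one controller's size and cost get amortized when a single stored program drives many identical fabrication stations:

```python
# One controller broadcasting a stored program to many stations.
# Illustrative sketch only; names and counts are invented.

class Station:
    """A fabrication station that executes externally supplied steps."""
    def __init__(self) -> None:
        self.steps_done = 0

    def execute(self, step: str) -> None:
        # A real station would actuate a mechanism; here we just count.
        self.steps_done += 1

class SharedController:
    """One controller feeding each instruction to every station."""
    def __init__(self, stations: list[Station]) -> None:
        self.stations = stations

    def run(self, recipe: list[str]) -> None:
        for step in recipe:
            for station in self.stations:  # same bits reused N times
                station.execute(step)

stations = [Station() for _ in range(1000)]
SharedController(stations).run(["advance", "bond", "retract"] * 5)
print(stations[0].steps_done)  # 15: one controller, a thousand stations
```

The controller is built once; every station added divides its effective size by one more, so "the computer is huge" stops being an objection once the fan-out is large enough.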
So what is a robot? For industrial fabrication purposes, I'll define a robot as a machine that can perform any one of several trajectories, under external control, during the manufacturing operation, to produce multiple different products. In other words, if a single machine can perform any of several different operations on its inputs, as selected by an external controller, in order to produce several different products from similar inputs, then that machine can be considered a robot.
The point of a robot, then, is to pack the functionality of several machines into the space of a single machine, and let the controller select which machine the robot is supposed to implement for any given manufacturing operation. The benefits of doing this are clear - fewer machines are required, and the machines can be simpler because complexity can be shifted to the controller.
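A minimal sketch of that definition, with invented operation names: one mechanism implements several "machines," and an external controller selects which one it acts as for each operation.

```python
# One machine body packing the functionality of several machines.
# Operation names and the workpiece model are invented for illustration.

class Robot:
    def __init__(self) -> None:
        # Each entry is one trajectory the mechanism can execute.
        self.operations = {
            "place_carbon":   lambda w: w + ["C"],
            "place_hydrogen": lambda w: w + ["H"],
            "rotate":         lambda w: list(reversed(w)),
        }

    def perform(self, op_name: str, workpiece: list) -> list:
        # The robot decides nothing; the external controller picks the op.
        return self.operations[op_name](workpiece)

robot = Robot()
workpiece: list = []
# The controller's choice of operations determines the product.
for op in ["place_carbon", "place_carbon", "place_hydrogen", "rotate"]:
    workpiece = robot.perform(op, workpiece)
print(workpiece)  # ['H', 'C', 'C']
```

Note where the complexity lives: the robot is a dumb dispatch table, and everything interesting is in the sequence of selections made by the controller.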
Let's talk about that controller. In order to direct a robot through a complicated series of motions, the controller has to feed a series of instructions to the robot. But - and this is a key point - the controller does not have to compute those instructions in real time. The instructions can be a pre-compiled recipe. Do this set of 5,000 steps, and you get a cubic nanometer of diamond; do that set of 7,500 steps, and you get a nanometer of carbon nanotube. The lists of steps can be computed when the product is designed, and only copied from place to place when it is manufactured.
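To make the key point concrete, here is a sketch with made-up step encodings: the recipes are compiled once at design time, then only copied and replayed at manufacturing time, with nothing computed while the product is being built.

```python
# Design time: an expensive planner produced these flat step lists.
# Step names and counts are invented for illustration.
RECIPES: dict[str, list[str]] = {
    "diamond_nm3": ["approach", "bond_C", "retract"] * 1667,  # ~5,000 steps
    "nanotube_nm": ["approach", "bond_C", "shift"] * 2500,    # 7,500 steps
}

def replay(recipe_name: str, actuate) -> None:
    """Manufacturing time: stream stored steps to the hardware verbatim."""
    for step in RECIPES[recipe_name]:
        actuate(step)  # each step is copied from memory, not recomputed

steps_streamed = 0
def count_step(step: str) -> None:  # stand-in for a real actuator
    global steps_streamed
    steps_streamed += 1

replay("nanotube_nm", count_step)
print(steps_streamed)  # 7500 steps replayed with zero runtime planning
```

The manufacturing loop contains a lookup and a copy, never a planning step; that distinction is what the next paragraph turns into an energy argument.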
Computation - the generation of new patterns of bits - has an irreducible physical energy cost (Landauer's principle), which on the scale of molecular manufacturing is quite large, and even reversible computing is only a partial help. But simply copying a pattern of bits has no irreducible energy cost (as long as the previous contents of the destination are known, so that they can be erased efficiently).
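For scale (standard physics, not figures from this article): Landauer's principle puts the minimum energy to erase one bit at

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K})
                 \times (300\,\mathrm{K}) \times 0.693
         \approx 2.9 \times 10^{-21}\,\mathrm{J\ per\ bit}
```

Multiplied by Avogadro's number, that is about 1.7 kJ per mole of erased bits, so computing even a handful of fresh bits for every atom placed sets a real energy floor on a molar-scale product. Copying a known pattern sidesteps that floor entirely.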
In Drexler's mechanical computers, a bit is represented by a tangible motion of a physical rod - a motion that is well-suited to controlling and perhaps even powering a robot. So something as simple as stepping through a pre-defined sequence of memory locations, pushing rods out the side of the computer that correspond to the bits in each location, could control a robot to perform a task of arbitrary complexity.
Of course, it takes energy to compute the lists of operations in the first place. But nano-built products will be highly repetitious; a recipe for a cubic nanometer of diamond will be re-used many times. The number of bits in a blueprint for a product may be vanishingly small compared to the number of atoms, and still specify the exact position of each atom in the product.
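A toy calculation (my own illustrative figures: 16 bits per step, a 1-micron cube) shows how strongly repetition compresses a blueprint relative to the atom count:

```python
# Blueprint bits vs. atoms for a highly repetitious product: one reusable
# recipe plus a repeat count. Encoding sizes are invented for illustration.

ATOMS_PER_NM3 = 176          # diamond: 3.51 g/cm^3, 12 g/mol, Avogadro
cube_side_nm = 1_000         # a 1-micron cube of diamond
voxels = cube_side_nm ** 3   # the nm^3 recipe is applied once per voxel

atoms = voxels * ATOMS_PER_NM3

recipe_bits = 5_000 * 16     # one 5,000-step recipe at 16 bits per step
repeat_bits = voxels.bit_length()  # enough bits to count the repetitions
blueprint_bits = recipe_bits + repeat_bits

print(f"atoms in product:  {atoms:.2e}")        # ~1.76e11
print(f"bits in blueprint: {blueprint_bits}")   # ~8.0e4
print(f"bits per atom:     {blueprint_bits / atoms:.1e}")  # ~4.5e-7
```

Even in this crude accounting, the blueprint is millions of times smaller than the product it exactly specifies, and the one-time design computation is amortized over every copy ever built.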
For some operations, it will be suitable to build single-purpose machines that can only do one thing. But for other operations, using externally controllable machines - robots - will make the nanofactory smaller and faster to build, and more flexible to use. A nanofactory using robots can easily build products of greater mechanical complexity than the nanofactory itself, and can build products that weren't designed when the nanofactory was designed and built. Speaking as a software engineer and a theoretician, I'd hate to give up my manufacturing robots. The good news is that I don't have to.
From reading both articles, it appears to come down to what is classified as a 'robot', or maybe a 'general purpose robot': the limits and requirements of both flexibility (mechanical motion possibilities) and intelligence. It seems that things you class as robots, he does not. It doesn't matter to me whether the 'machine' is called a robot or not, if it can be 'instructed' to move the needed 'parts' around to create the desired assembly (or assemblies).
Posted by: mMerlin | March 14, 2009 at 09:33 PM