So here I am at this conference on self-assembly, and I'm talking with people about Rothemund's DNA staple technique. The technique is elegant, simple, and powerful: mix a long DNA strand with a bunch of short strands. The short strands bind to different regions of the long strand and pucker it up into a shape.
Person after person has told me that they never would have expected this to work! I was really surprised at that, so I've started having conversations about it.
One of the most common reasons has to do with entropy. Perhaps it's not a coincidence that entropy is also one of the most common objections to molecular manufacturing. There's a whole lot of very solid theory about entropy, which says that unstable molecules will fall apart. But what's often missed - by the people applying this theory! - is that it describes equilibrium, which in effect means infinite time. In other words, entropy says diamonds aren't literally forever... but in the real world, we can ignore that: the kinetic barriers to decay are so high that nanomachines built of diamond could laugh at entropy for far longer than we actually care about.
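To see why the timescale matters, here's a rough Arrhenius-style estimate, sketched in Python. The attempt frequency and barrier heights are stock textbook assumptions, not measured values for diamond or any particular molecule; the point is only how steeply lifetime grows with barrier height.

```python
import math

# Arrhenius-style estimate of how long a metastable structure persists.
# The escape rate over an energy barrier is roughly
#   k = nu * exp(-Ea / (kB * T)),
# so the expected lifetime is 1/k.  All numbers below are illustrative
# assumptions, not measured values for any particular material.

KB = 8.617e-5  # Boltzmann constant, eV/K

def lifetime_seconds(barrier_ev, temp_k=300.0, attempt_hz=1e13):
    """Mean time to cross an energy barrier of `barrier_ev` (eV) at
    `temp_k`, assuming a typical molecular attempt frequency ~1e13 Hz."""
    rate = attempt_hz * math.exp(-barrier_ev / (KB * temp_k))
    return 1.0 / rate

# A weak bond (~0.5 eV) breaks in a blink; a strong covalent bond
# (~3.5 eV, roughly a carbon-carbon bond) outlasts the age of the
# universe at room temperature.
print(lifetime_seconds(0.5))  # small fraction of a second
print(lifetime_seconds(3.5))  # astronomically long
```

So "unstable at equilibrium" and "falls apart on any timescale we care about" are very different claims - the exponential in the barrier height separates them by tens of orders of magnitude.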
So, back to entropy vs. DNA self-assembly. I was talking with a physicist over lunch today, trying to figure out why he expected DNA shapes to fall apart on entropic grounds. It turned out that he was right - the shapes will eventually fall apart - but he hadn't thought about the time it would take.
Another entropy-related reason that's often cited is that DNA staple shapes require combining hundreds of molecules into a single structure, and the entropy of a single structure is a lot lower than the entropy of hundreds of free fragments. While this is true, the fact remains that DNA strongly prefers to join into double strands: the enthalpy of base pairing more than pays each strand's entropic cost. I haven't found anyone who can explain why increasing the number of strands would significantly reduce the willingness of each strand to join the structure - but I've found lots of people who seem to believe this would be the case.
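To make the per-strand argument concrete, here's a minimal Python sketch of two-state hybridization at equilibrium. The free energy, concentration, and temperature values are illustrative assumptions, not measurements from any particular origami design; the point is only that each staple's equilibrium occupancy depends on its own binding free energy, not on how many other staples the structure uses.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_bound(dG_kcal, conc_molar=1e-7, temp_k=298.0):
    """Equilibrium fraction of a staple bound to its target site,
    treating it as an independent two-state hybridization with free
    energy dG (kcal/mol, negative = favorable) and excess staple
    concentration conc_molar.  Illustrative numbers only."""
    K = math.exp(-dG_kcal / (R * temp_k))  # association constant, 1/M
    x = K * conc_molar
    return x / (1.0 + x)

# A typical ~30-base staple hybridization is very favorable (tens of
# kcal/mol), so each staple sits essentially fully bound - and that
# calculation doesn't change if there are two staples or two hundred.
print(fraction_bound(-30.0))
```

If someone believes hundreds of staples can't co-assemble, the burden is to show a term in the free energy that grows with the number of strands - and for independent binding sites, there isn't one.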
I can understand how assumptions from one domain (say, the physics of small molecule chemistry at equilibrium) can make it difficult to invent a new concept in another domain (say, DNA binding). It's a bit harder for me to understand how that assumption wouldn't be questioned when confronted with a really good idea.
In fairness, it's true that many seemingly good ideas don't work for reasons that couldn't easily be foreseen. So rejecting a seemingly good idea isn't always a destructive thing to do - it can save a lot of effort that would be wasted trying to develop the idea only to discover that some minor practicality prevented it from actually working.
But I'm starting to think that the stated reason for rejecting an idea is frequently unrelated to the reason the idea wouldn't actually work. In other words, science has developed a rule of thumb that works: reject most ideas, accepting only the most compelling and fully demonstrated ones. And a procedure for applying it: to reject an idea, find and cite some over-generalized and poorly understood theory. The fact that the procedure is groundless doesn't mean that science's rate of rejecting new ideas is wrong. That rate may, in fact, be highly functional.
If the rejection of new ideas is basically random, then I'm saying that science advances by blind luck. In fact, that's plausible. Any exploration of a sufficiently unfamiliar and difficult problem domain must advance by blind luck. What makes science (and evolution) work is that they can, after the fact, detect and preserve lucky accidents.
A question we might ask is whether things would be better with a somewhat lower idea-rejection rate, and if so, how to shift the rate. The answer is probably something along the lines of increasing funding for researching crazy ideas. I don't know if there would be any point in trying to convince scientists that they're routinely rejecting ideas randomly.
Another question we might ask is what to do when scientific advances are needed to achieve a technological goal. The answer seems to be: Don't expect the scientific establishment, as a whole, to work toward that goal, no matter how workable or clearly articulated or well-calculated. The goal will simply be treated as a new idea, and rejected. Instead, find a way to pay individual scientists to work on it.
By the way, I just heard a talk in which a DNA cascade circuit was developed and tested to compute the square root of a four-bit number. It involved about 130 strands, implementing about 76 logic gates. It would be very easy to find a theoretical reason why this couldn't possibly work... but in fact, it did work.
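The function that circuit computed - the floor of the square root of a four-bit input - is easy to state as ordinary Boolean logic. Here's a Python sketch; the gate decomposition below is my own illustration, not the actual layout of the reported ~76-gate strand-displacement circuit.

```python
import math

def sqrt4(n):
    """Floor of the square root of a 4-bit input (0..15), computed
    with AND/OR/NOT gates - the same function the DNA cascade circuit
    evaluated.  This particular gate decomposition is illustrative."""
    assert 0 <= n <= 15
    b3, b2, b1, b0 = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    n3, n2 = 1 - b3, 1 - b2  # NOT gates
    y1 = b3 | b2             # high output bit: 1 iff n >= 4
    # low output bit: 1 for n in {1,2,3} (result 1) or n in {9..15} (result 3)
    y0 = (n3 & n2 & (b1 | b0)) | (b3 & (b2 | b1 | b0))
    return 2 * y1 + y0

# Sanity check against Python's integer square root:
assert all(sqrt4(n) == math.isqrt(n) for n in range(16))
```

Two output bits, a handful of gates in silicon terms - but implementing it in free-floating DNA strands, reliably, is exactly the kind of result a plausible-sounding theoretical objection would have "ruled out."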
Interesting idea. I have not thought this way before, but now I think you are exactly right.
Not only in science, but everywhere. People seem to reject new ideas based on some empirical "utility function" that includes the credibility of the author, previous personal experience, mood, time of day, and a number of other factors.
And opinions on new ideas tend to form once and then last a long time.
And yes, telling people that they reject ideas randomly will do no good. If you understand the "utility function" of a particular person, you can exploit your knowledge to make them accept almost any idea you want. Otherwise, you are counting on blind luck.
Posted by: Dan S | April 29, 2010 at 04:53 AM
I think scientists tend to reject anything that isn't "obvious in retrospect" (i.e. once they hear of it, is it pretty obvious how one gets from the previous or current state of art to the new level?). Incrementally better results from modest changes to existing methods are accepted as worth looking into, reproducing, expanding upon, etc.
Building "anything" out of DNA is much more acceptable once someone has demonstrated building 3D boxes with controllable lids, which was more acceptable after someone demonstrated building interesting tiled objects, which was more acceptable after someone demonstrated linking fragments of synthetic DNA, which was more believable after someone developed the ability to synthesize and replicate arbitrary short DNA segments, etc.
Propose in the 1980s that it should be possible to build machines that build nanomachines that build anything from the atoms up, and with no obvious path in place, scientists will assume that the chain of required developments is long, that each stage has a chance of failure, and so that the cumulative chance of failure approaches 100%. Which is true for any single development path one might propose, but ignores the likelihood that there are a huge number of paths to take.
But the chance of any individual researcher betting his career on developing a correct path, in the face of so many wrong paths, is essentially nil. Rather than admit their self-interested conservatism, they throw out a likely-sounding objection that can't be proven wrong until someone else puts in the effort to develop a path that avoids it.
The difference between a 2nd rate and 3rd rate scientist is probably intellect - but the difference between a 1st rate and 2nd rate scientist lies in how daring they are.
Posted by: Tom Craver | April 30, 2010 at 02:16 PM
Tom, the impression I'm getting from the experimentalists here is that nothing is "obvious in retrospect." Any given idea might fail for some small random reason. The good ideas are the ones that take "only" a few years to develop.
So it makes sense to say "scientists reject anything that won't obviously work" but that equates to "Scientists reject anything."
BTW, can anyone tell why it takes so long for the "Post a comment" form to appear?
Chris
Posted by: Chris Phoenix | April 30, 2010 at 02:27 PM
I may be misunderstanding you, but again, I think it depends on how big a jump some development takes.
The "obvious in retrospect" rule of thumb is just meant to categorize the sort of work that is more likely to be taken on - i.e. something that a scientist can reasonably expect other scientists to quickly accept, once they've done the work. One can always find some scientist who will reject any speculation, no matter how trivial - but smaller, less risky ideas and steps are more likely to find some scientist willing to take a chance on them, because there is continuous low-level pressure to make SOME sort of advance - publish or perish.
It may seem obvious that there must be some viable path to developing mature molecular manufacturing - but scientists aren't willing to bet their career on finding the right path, and if they feel pressured to bet their careers on that goal, they'll come up with rationalizations for why it isn't worth striving for.
Only when outsiders with a strong motivation put up both the financing AND the pressure for success does goal oriented science proceed at the rapid pace an outsider would naively expect. I.e. an Apollo or Manhattan project.
We came within an inch of getting that with the NNI, but those who politically initiated it either didn't really understand what was required or lost their nerve in the face of resistance from scientists, while scientists saw a sweet funding source they could divert to ordinary low-risk research by sticking "nano" in their proposals.
Posted by: Tom Craver | May 03, 2010 at 11:13 AM
Tom, I think we may be more or less agreeing. Yes, if something is small enough, it will be more likely to be accepted.
There's another factor that we should consider: the development of research communities. A research community may develop a common body of knowledge of what works, and then continue to develop new results and successes within or slightly outside that area.
My impression is that, without a research community, it's rather hard for an idea - even a related idea - to get traction. Even an idea like DNA staples would have found it hard to get traction within the existing DNA community, unless Rothemund had demonstrated it himself first.
Chris
Posted by: Chris Phoenix, CRN | May 03, 2010 at 04:59 PM