Sorry I haven't been posting here more regularly. The biotech issue I've been working on has really claimed all my attention. I've been learning a lot about bacteria...
So far, conversations have mostly gone like this: I say to an expert, "What ____ is doing seems potentially very dangerous, because ___ could happen, or at least no one has shown me a reason it can't, and I've talked to several experts already."
I get one of three replies:
1) "I don't think that's worth worrying about, and I'm not inclined to look further."
2) "I don't think it will happen, because of _____." Then I look up _____ and find that it's factually incorrect.
3) "It seems like this is worth looking into further. You should talk to more experts."
Is it just me, or should I be getting more worried as I get more and more responses of type 2 and 3?
So far, I haven't managed to get any experts in the field to say "Yes, this is really worrisome, and I'll use my reputation to try to get the researchers to back off." Of course, this is a very hard thing for any scientist to say. There's a chasm to be crossed between "This might be a problem" and "This might be a problem I should act on, even if it means criticizing eminent fellow scientists." And the magnitude of the potential problem does not seem to make it easier to cross that chasm.
This has some similarities with molecular manufacturing, and some differences. One major difference is that molecular manufacturing is still in the future, while the potentially dangerous bio research is going on today.
A major similarity is that there appears to be a potential for self-replicating systems to be immensely powerful - more powerful than most specialists' intuitions suggest - but that potential is only visible to generalists and systems thinkers. The experts don't easily see it, don't want to see it, and usually either dismiss it or treat it as someone else's theoretical problem.
For the past week or so, I've been in communication with the researcher who's actually doing the work I (and several other experts in various related fields) think is dangerous. As long as he's talking, I'm avoiding taking action that could start a grassroots movement against his work. Such a movement could be very powerful in the short run - and would probably create a lot more heat than light, causing future research along the same lines to be obscured and harder to regulate.
If any of you have experience with a case where a scientist was successfully convinced that their research was riskier than they thought - risky enough to substantially modify their plan and delay their work - then please let me know how that was accomplished. If no one has heard of such a thing happening... then what does that say about how the scientific community handles newly discovered risk?
(Yes, I know about Asilomar. That was one event, decades ago. How common are such things?)
I have never personally heard of a scientist being convinced that his own personal research is too dangerous to continue. For heaven's sake, Robert Oppenheimer pursued the fission chain reaction, even though no one (not even he) was sure what would happen, and some candidate scenarios included the literal destruction of the planet.
I think you will have a hard time convincing any scientist to take a public stand even against another scientist's work. There are two main reasons for this, one personal and one theoretical.
On the personal level, scientists are very reluctant to take a professional position on a question that isn't solidly within their expertise. In practical terms, this usually means that such a scientist would want at least to talk personally with the researchers involved in problematic research, and ideally he would want to have been involved in such research himself. Without that level of involvement in the issues, most scientists would consider themselves unqualified to render a concrete professional opinion on whether or not such research should be performed.
This ties into the theoretical issue. On the broader stage, saying that research should not proceed is functionally equivalent to saying that there are questions that should not be asked. There is probably no idea in the world that is more anathema to a scientist than this.
That's why Oppenheimer forged ahead, and that's why scientists are so reluctant to make such proposals. Before a scientist would say such a thing, he or she would want to have the most thorough grounding possible in the details of that research. I suspect that the experts you've talked with are shying away from professional opposition for these reasons more than any other.
Posted by: Paul | May 15, 2009 at 11:59 AM
"I gave you your chance to play cop . . . and you blew it!" - Lt. Moe Tilden(Copland Character)
Posted by: the Oakster1 | May 15, 2009 at 01:03 PM
Look at the fuss over the LHC.
Posted by: Tom Mazanec | May 16, 2009 at 01:09 PM
Hey Paul, opera fan?
Posted by: Indio | May 18, 2009 at 08:39 PM
Hey Chris,
In case you did not know, a science fiction writer you know has spilled the beans on the nature of your concern. I will try to be vague in my response.
I agree with you, this research is potentially very disruptive to the preexisting ecological relationships between this bacterium and the rest of the ecosystem.
And the microbe in question can survive in aerobic and anaerobic conditions; it lives in the gut of most mammals and birds, and it lives in the soil and the water.
The biggest problem I see is that we know so little about the details of the ecology of microbes. It is very difficult to do ecological research on microbes in their natural environment.
Perhaps the way forward is to ask the researcher to buy catastrophic risk insurance for the experiments he would like to do. That way he is working with outside experts to assess the nature of the threat. And he can reduce the cost of the insurance by adding protocols that decrease the danger.
Posted by: jim moore | May 19, 2009 at 09:48 AM
Sigh... in the same email where I told him about it, I said this:
"In this case, I think it's important that scientists self-regulate.
Creating [the microbe in question] will be pretty easy to do in a few more
years, and [a scientist] has already published an article recommending
it. If scientists don't change their collective mind before
environmentalists get hold of the issue, things will turn adversarial
- and the research will proceed. We can't afford that."
Fortunately, he stuck the mention of the research at the end of a post on a different topic. None of the comments have referenced it.
IMO, it would be appropriate (though not sufficient) to require any researcher creating such microbes to feed the parent strain to his children. Just to test whether he's _absolutely_ sure they won't grow in the wild.
The good news is that the current project will not produce a strain that's fully [what they're trying to build]. It's likely on the favorable (mundane) side of the overgrowth tipping point. So we may have more time than I thought.
Chris
Posted by: Chris Phoenix | May 19, 2009 at 11:39 AM
Perhaps we should have "science insurance", instead of the "Precautionary Principle".
"You want to do a nuclear fission chain reaction experiment - but you don't think it will *really* destroy the world? OK, we set the potential damages at $1 x 10^16, and an independent panel of experts reviewing your research sets the probability that you are incorrect at around 1 x 10^-8 - so pay $100 million up front."
"No refunds if you're correct - the money will be spent on amelioration (e.g. efforts to get humanity off the planet)."
"Your biological experiment would only cause $1 x 10^12 in damages? We agree with your projection of 99.9999% certainty it won't. So pay $1 million and go ahead."
"What's that? You say you can add some controls that will reduce the chance from 1 in a million to 1 in a hundred million? OK, if you can that, the cost drops to $20,000. Why not $10,000? You should have come up with better safety controls before bringing your research to the insurance board - now we have to review your proposal again."
There should also be rewards for reducing risks.
"You want to set up an asteroid monitoring program that reduces the chance of an asteroid destroying all human life on Earth by 90%, by giving us 20 years warning to do something about it? OK, we calculate that to be worth $1 x 10^9 a year, except that 90% of that value is already provided by various efforts we're funding to get humanity off the planet and in a position to do something about asteroids. So you get $100M a year.
Posted by: Tom Craver | May 22, 2009 at 01:07 PM