August 12, 2004


Comments


John B

Do you REALLY think you can get worldwide acceptance and enforcement of nanotechnology limitations? Why - what gives you confidence that this is possible?

One classic example of international failure along these lines is international copyright legislation.

-John

Janessa Ravenwood

John: We've tried to convince them of that, believe me. They're eternal optimists on this one. They won't believe it's not possible until multiple nanofactories are already developed in multiple countries around the globe.

John B

Perhaps a more useful approach, instead of attacking CRNano's approach as I foolishly did, is to find a reasonably non-destructive way nanotechnology could be developed under an (IMO) more realistic scenario. Unfortunately, I don't see a really good way for this to go.

The classic example in history which keeps popping up in my head is the medieval history of the crossbow. Because it was so easy to learn to use and so capable of penetrating the best armor around, it was declared a weapon unfit for Christian hands. Which tickled the nobles pink, because they could then denounce anyone who used such weapons as un-Christian, while going out and hiring non-Christian mercenaries who just happened to be equipped with crossbows!

I wonder if there won't be a similar situation in the first world, condemning nanotechnology as 'too dangerous' or some such, while those at the top of the food chain hie off to a third-world nation with pro-nanotech legislation and develop it in suitably regulated (read: 'taxed') facilities. I'm afraid I wouldn't put it past certain large commercial organizations to try to monopolize nanotech in the current factory-centric production model, with only 'black budget' programs allowed to utilize non-facility nanotech.

In short, not too rosy a picture.

Can anyone lend a hand here and come up with a better scenario, based on the premise that international legislation won't be universally accepted and equally enforced?

-John B

Janessa Ravenwood

That's what we've been trying to think of here. So far, no consensus.

Brett Bellmore

I don't think there IS any big, overarching solution. Just a lot of tiny little pieces that accumulate to something effective. And "effective" doesn't mean ironclad, it means good enough to keep the body count down. After all, people die today, of all sorts of causes... Why should we expect that to change, short of major alterations to the nature of humanity itself? Here are some ideas:

1. Avoid anything in the way of design censorship which would cause people of reasonably good will to seek out a black market. There's going to be a black market in any case, but the smaller you can make it, the less it aids people of truly malign intent. Which means paternalistic prohibition just isn't an option.

2. Flood the system with rigged "bad" designs and utilities, such that people who go looking for them find the boobytrapped ones before they stumble across something real.

3. Any time a security hole is found, USE IT to distribute benign viruses which plug it, or at least point out to people that they need to upgrade. There's really no reason only bad guys should be exploiting bad security.

4. Build security into the communications from the ground up, even at the cost of a lot of performance. There will be performance to spare anyway. And no back doors! Realistically, there's no way to keep burglars from finding them.

5. Encourage the use of multiple, competing design validation organizations. (A monopolistic one would be tempted to violate principle 1.)

6. Make a fairly complete suite of self-reliance product designs available for free, and encourage free distribution of useful designs through prizes and social recognition.

7. Harden society against the disasters which WILL happen (they happen today, after all!) by switching to a more distributed, fault-tolerant model for all utilities, and building protective features into everyday products. Like building codes for homes that require them to be effective biowarfare shelters, smart-cloth clothing that knows how to become hazmat gear, and local caching of vital supplies.

8. Harden the people themselves. Anything short of that is Maginot line thinking.

9. Get some of our eggs out of this basket. Also a result of following rule #1, as a lot of people want out of it, and with nanotech would get out, unless forcibly prevented.

10. A toughie: Avoid "monoculture" in nanofactories and control software, in order to minimize the chance of everything being vulnerable to the same exploit. Maybe some system where the functionality is described at a high level, and implemented in somewhat randomly chosen ways?

Chris Phoenix, CRN


With our ability to impose our patterns on everything from the molecules up, the world will become like a computer with ten billion semi-trained programmers: a computer that can not only overwrite its own operating system (social) and disk (biological) but flash its own ROM (inorganic).

What CRN is proposing is perhaps analogous to an operating system, in which all the programs are restricted from accessing the lowest levels directly. OS's tend to impose complexity and inefficiency for the sake of formality and restriction.

It might be possible to skip a step and deploy the equivalent of scripting languages, even on a system without memory protection. Scripting languages tend to impose simplicity and inefficiency for the sake of expressive power.

Maybe, to continue the analogy, all the programmers would be too busy scripting cool new web applications to want to write in languages that can directly mess with the lower levels of the system.

In other words, one possible alternative (or supplement?) to the centralized-security model is to direct everyone's creative energy toward new domains rather than to messing with existing domains.

Would this solve the entire problem? No. Would it solve enough of the problem? That depends on how sensitive the lower levels are to direct imposition/embodiment of intellectual patterns.

This has been a distillation of a week's worth of very high-level theoretical thought. Feel free to ask questions.

Chris

Chris Phoenix, CRN


The previous post was an answer to John B. This one is an answer to Brett.

Most of your points sound like good ideas. I like the idea of competitive validation organizations (point 5). Maybe there could be a bounty or reward for orgs to find security holes which slipped past other orgs.

I tentatively disagree with point 10 (avoid monoculture). Diversity increases the chance of a security hole. If a security hole only caused local damage, I'd agree that this was a good tradeoff. But if a security hole takes out a neighborhood, then having multiple designs in each neighborhood makes things worse than monoculture, not better.

At least one security dynamic is different between nanofactories and computers. In computers, the speed of spread of a pathogen is related to the fraction of vulnerable computers. In nanofactories, transmission to nanofactories will probably not be dependent on transmission from infected nanofactories, so having a lower fraction of vulnerable nanofacs will not reduce the speed of spread.
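A toy simulation makes this contrast concrete. The sketch below (an illustrative model only; every parameter is an assumption) spreads an attack two ways over the same population: worm-style, where each infected node probes random targets, and broadcast-style, where an already-compromised computer network pushes the exploit to every vulnerable nanofactory at once. Lowering the vulnerable fraction slows the first but only shrinks, never slows, the second.

```python
import random

def worm_spread(n_nodes, vulnerable_frac, probes_per_step, steps):
    """Peer-to-peer spread: each infected node probes random targets.
    Spread speed scales with the fraction of vulnerable targets."""
    vulnerable = set(random.sample(range(n_nodes), int(n_nodes * vulnerable_frac)))
    infected = {next(iter(vulnerable))}
    for _ in range(steps):
        hits = set()
        for _ in range(len(infected) * probes_per_step):
            target = random.randrange(n_nodes)
            if target in vulnerable:
                hits.add(target)
        infected |= hits
    return len(infected)

def broadcast_spread(n_nodes, vulnerable_frac):
    """Broadcast spread: the exploit reaches every vulnerable device in one
    step, so the vulnerable fraction changes the damage, not the speed."""
    return int(n_nodes * vulnerable_frac)

for frac in (0.9, 0.3):
    print(f"vulnerable={frac:.0%}: worm reaches {worm_spread(10_000, frac, 2, 10)} "
          f"in 10 steps; broadcast reaches {broadcast_spread(10_000, frac)} in 1 step")
```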

Chris

Chris Phoenix, CRN


Brett, can you tie your suggestions in with our Thirty Studies--perhaps #22 "How can proliferation and use of nanofactories and their products be limited?" or #29 "What policies toward administration of molecular manufacturing does all this suggest?"
http://crnano.org/study22.htm
http://crnano.org/study29.htm

Or can you identify some questions that we should be asking instead that will generate this useful set of answers (and maybe others as well)?

Chris

Brett Bellmore

"Diversity increases the chance of a security hole."

Yes. I think we've got a very fundamental disagreement over aims here. Your aim is to avoid nasty things happening, even at the cost of their being worse if they do happen. Like opposing early, self-sufficient space colonization in order to avoid the possibility that colonists might exploit space resources to attack Earth, but thereby making sure that everyone is within reach of an event that would destroy all life on Earth.

My approach is to assume that nasty events will indeed happen with some frequency, and try to minimize how nasty they can be. If nanotech accidents and deliberate attacks do happen, but they're on the whole no more destructive than today's wars and natural catastrophes, I'm satisfied.

To some extent these are mutually exclusive approaches, unfortunately.

Chris Phoenix, CRN


Brett, one more comment: I disagree with most of the details--and more importantly, the level of detail--in point 7. Many of the failure modes will change. For example, with recycling of metabolic chemicals, you wouldn't have to stockpile any resources other than a few kWh per day of energy. With nanomedicine, you wouldn't have to worry nearly as much about bio attack.

There may be some nano-disasters that we have to make specific preparations for in advance. And it'll certainly be a good idea to build infrastructure that's not brittle. But the things that act as disasters will change so much that making specific plans is misleading.

Chris

John B

Quoth Brett at August 18, 2004 05:33 PM: "I don't think there IS any big, overarching solution. Just a lot of tiny little pieces that accumulate to something effective. And "effective" doesn't mean ironclad, it means good enough to keep the body count down. After all, people die today, of all sorts of causes... Why should we expect that to change, short of major alterations to the nature of humanity itself?"

I agree, there's no such thing as perfect security (except perhaps in a system with no operators - think Malthusian for a second). As far as 'keeping the body count down', I agree that's pretty critical - however, I suspect that ONE boo-boo, even a non-fatal one (think Three Mile Island), will cause some truly massive setbacks for nanotechnology. IMO this could well lead to increased, 'oppressive' regulation and the formation of a thriving black market or differential global availability of the technology.

This black-market scenario is something I think we all agree on, from conversations here and shamelessly eavesdropping elsewhere. I personally would consider it a major risk during the early adoption of nanotech, and something we probably won't be able to use negative feedback controls on. (Tariffs, least-favored-nation status, etc. don't mean a whole lot if you're self-sufficient and either the sole source or one of the very few sources of the technology in question.)

Brett continues: "Avoid anything in the way of design censorship which would cause people of reasonably good will to seek out a black market."

This to me is extremely problematic. One of the few possible controls /is/ banning specific avenues of research and development. The problem is how they're limited. If you ban self-replication, for instance (and that's one I don't think is too far-fetched), you lose a GREAT deal of the benefits of nanotechnology. Not all of it, but still lots.

Brett's idea number two, "Flood the system with rigged "bad" designs..." seems to me to be a VERY bad idea. You'll be playing with the public perception of the reliability and safety of nanotech.

Also, you're now giving a technical challenge to the effectively sociopathic tech-craving population segment - "are you good enough to figure out what'll work and what won't?". If you look into the script-kiddie problems of the last decade (where some flooding of inferior designs *has* happened), I think you'll find that yes, a lot of dangerous people have been caught due to the inferior tools. However, I also posit that those who /do/ survive have an easier time doing so due to the availability of tools, regardless of how inferior, for initial prototypes.

The third idea, 'benign viruses', is another potential nightmare. You're releasing a viral patch. Hope you don't muck up the code, and that no one catches a whiff of it before you're fully deployed. It's IMO not a horrible idea in general - it's just that I can see it being more of a problem than a benefit in some circumstances. Precautionary principle? No, I don't think so - rather, cautious and careful advance on the problem.

Number five is quite good, except that I would add that the highest-quality software should be submitted to and approved by multiple independent groups. If you rely on a single group, using a single method, you may well miss something. However, how can you rate the effectiveness of such a group? Over time, you can catch bug reports the various groups missed and use that as a 'rating' method, perhaps - but short term, how can you determine which are valid attempts at trapping errors versus quick make-a-buck-by-approving-nanotech methods, especially as you're setting this up as a competition between them, giving them somewhat of a reason to keep their methods proprietary?

For #6 - what kinds of self-reliance devices do you have in mind? Solar cells (which could then be used to either starve crops or generate power for other nastiness)?

In short - anything beneficial /will/ have negative potentialities. How do you ensure not only that you build the right thing, but that your built results aren't abused in one way or another?

Number 10, 'avoid a monoculture', is one where I think I side more with Brett than with Chris. While monocultures are more easily controlled via negative feedback and heterocultures aren't easily controlled at all, having a diversity of approach (if there's some reasonable way to test the validity of approaches) means you're probably less likely to run across an unpleasant surprise.

Besides, I REALLY doubt (as commented on above) that this globe will be able to produce even a limited monoculture with regards to nanotechnology - we've not been able to handle anything else all together so far, so what would change to allow us humans to do so now?

Thanks for the thoughts, Brett. Hope this is useful to you and all.

-John

John B

Responding to Chris' message of August 18, 2004 05:55 PM: "What CRN is proposing is perhaps analogous to an operating system, in which all the programs are restricted from accessing the lowest levels directly. OS's tend to impose complexity and inefficiency for the sake of formality and restriction."

Considering the degree of vulnerability in most any operating system you care to name, this isn't really all that reassuring. *wry grin* Especially since your goal is to prevent access to certain aspects of nanotechnological development which some group or groups of people on the face of this planet will be /very/ interested in developing!

Chris continues, "It might be possible to skip a step and deploy the equivalent of scripting languages, even on a system without memory protection. Scripting languages tend to impose simplicity and inefficiency for the sake of expressive power."

In which case, one of the early efforts will probably be to develop a compiler or other more efficient, less limiting 'scripting language' to handle some of the nastier bits of technology.

Another alternative is the generation of a toolset - CRNano's concept of 'blocks' of technology - which then can be assembled outside of machine control to make, perhaps, an uncontrolled nanofac.

In short, unless you're very careful in what patterns you allow, and extrapolate what each new piece can do as it comes to 'publication' in conjunction with all other previously known pieces, you'll eventually give enough rope for some bright boy or girl genius out there to hang themselves.

"Maybe, to continue the analogy, all the programmers would be too busy scripting cool new web applications to want to write in languages that can directly mess with the lower levels of the system.

In other words, one possible alternative (or supplement?) to the centralized-security model is to direct everyone's creative energy toward new domains rather than to messing with existing domains."

Sorry, that strikes me as playing the ostrich with its head in the sand. If you plan on positive results, your surprises are going to be nasty. In any group of programmers, you'll find SOMEone who's interested, either academically or otherwise, in 'breaking the system'. I really doubt that'll change in the short time frame we're talking about here for relatively simple diamondoid or silicon (or whatever) nanotechnologies.

"Would this solve the entire problem? No. Would it solve enough of the problem? That depends on how sensitive the lower levels are to direct imposition/embodyment of intellectual patterns."

True enough. Additional factors depend on the degree of accessibility people have to the technology, the acceptance curve of the new nanofactories, the global efforts to break such security devices as exist, etc. In short, it's a balancing act IMO between making it universally available and useful (and therefore subvertible) and making it unavailable with less usability (and therefore less subvertible).

"This has been a distillation of a week's worth of very high-level theoretical thought. Feel free to ask questions."

Thanks for your reply - as usual, it's quite thought provoking. I hope the above is useful to your thoughts.

-John B

Chris Phoenix, CRN


Brett, on security holes and diversity:

Our goal is to minimize total nastiness. Where I think we disagree is on the patterns of nastiness that can happen with nanotech. Reread my analysis: If a hole can take out a neighborhood, then increased diversity in a single neighborhood only makes things worse.

Look at it this way: If a company had Macs and Linux on their intranet, would you recommend that they increase security by replacing 1/3 of their machines with Windows machines?

And you didn't answer my other point about compromised machines not being the source of compromising other machines (which is very unlike computers).

Chris

John B

Quoth Chris, "And you didn't answer my other point about compromised machines not being the source of compromising other machines (which is very unlike computers)."

I don't see how this is the case, Chris. Once one system is 'hacked' and capable of producing unregulated goods, why would the hackers in question not have it replicate itself? Isn't this as much of a problem - exponential growth of 'compromise' - in both nanofactory and viral situations?

Of course, you've got much greater power and raw-material problems with nanofactories. And one 'infected' or 'hacked' system can't infect other still-operating systems per se. However, the knowledge gained in hacking the system /can/ be replicated quite easily, and that may be more of a viral vector along the computer model.

-JB

Chris Phoenix, CRN


John B's comments:

"If you ban self-replication, for instance (and one I don't think is too far fetched), you loose a GREAT deal of the benefits of nanotechnology. Not all of it, but still lots."

Good news: this understanding is outdated. Self-contained self-replicators are unnecessary at any stage of developing exponential manufacturing.

On approval groups: "how can you rate the effectiveness of such a group?" and "short term, how can you determine which are valid attempts at trapping errors versus quick make-a-buck-by-approving-nanotech methods"

Simple: don't give them much credit for approving, and give them lots of credit for finding problems. If they miss even a few problems, give them less and less credit for approving. In the short term, you don't have to know which groups are good, as long as you know that some good groups exist; the point is to catch problems.
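As a minimal sketch of this scoring structure (the weights below are my assumptions, not CRN policy), a certifying org's reputation could be tracked like this:

```python
class CertifierScore:
    """Track a certifying org's credibility: approvals earn little, caught
    problems earn a lot, and each missed problem discounts all future
    approval credit. All weights are illustrative assumptions."""

    def __init__(self):
        self.score = 0.0
        self.approval_credit = 1.0

    def approved_ok(self):      # approved a design; no problem surfaced
        self.score += 0.1 * self.approval_credit

    def found_problem(self):    # caught a real flaw others missed
        self.score += 10.0

    def missed_problem(self):   # approved a design later shown to be flawed
        self.approval_credit *= 0.5
        self.score -= 5.0
```

Under weights like these, a rubber-stamp org's approvals rapidly become worthless, while any org that catches even occasional real flaws keeps accumulating credit - which matches the point that, short term, you only need to know that some good groups exist.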

"we've not been able to handle anything else all together so far," What about admiralty law?

And if you agree with Brett on monoculture, I'd also like to see you address my technical points on the difference between computer security and nanofactory security.

"ostrich with its head in the sand": this is frustrating: I suggest a supplemental approach designed to encourage creativity and non-coercively discourage nastiness, and am immediately criticized insultingly for being unrealistic, on the grounds that this won't solve all problems. But I explicitly said that it wouldn't! What's your goal here: to encourage me to improve, or to attack me for not being good enough?

Chris

John B

From Chris: "John B's comments: "If you ban self-replication, for instance (and one I don't think is too far fetched), you loose a GREAT deal of the benefits of nanotechnology. Not all of it, but still lots."

Good news: this understanding is outdated. Self-contained self-replicators are unnecessary at any stage of developing exponential manufacturing."

OK - are you talking about your nanofac design on the CRNano pages? If so, does it not have the capability to generate more copies of itself, given the right programming, power, and raw materials? Again, if so, how is this not self-replication? *confused look*

I'm sorry if I seem combative about this - that's not at all my intent. My position is that you're suggesting possible solutions, and I'm trying to provide feedback to end up with a (hopefully!) better solution by plugging what appear to me to be holes in the logic. I understand this is a work in progress, and I'm hoping that you see this as constructive criticism, not as a slam. My apologies if these comments are perceived as an attack.

That is not my intent.

"On approval groups -snip- Simple: don't give them much credit for approving, and give them lots of credit for finding problems."

OK - so you have no basis to trust any of them at the beginning? No authority to turn to, reassuring the populace with "This is reasonably safe" or "I think this is excessively risky"? Sounds to me like this may be a PR nightmare - "Look at all the problems they've found with this stuff! And no one is saying it's safe!"

Perhaps have multiple (3+) different organizations set up from the get-go, with deliberately different approaches to the technology. As new approaches are presented, have them work in conjunction with the 'proven' groups until they build a track record.

Of course, this leaves you vulnerable with regard to the initial groups, and setting up these groups is going to be a moderate-to-large investment in salaries and equipment. This might be possible with a governmental/university series of partnerships (MIT, CalTech, etc.) or even possibly gov/edu/public partnerships, including Zyvex, Foresight, etc.

""we've not been able to handle anything else all together so far," What about admiralty law?"

Correct me if I'm wrong, but wasn't admiralty law imposed from above by the Spanish and Portuguese and backed by the Catholic Church, later gathered under the aegis of the British when they took over the mantle of the primary naval power?

Even so, there's considerable disagreement with regard to salvage/underwater archaeology. There's also the whole Gulf of Sidra "Line of Death" position that Libya tried to press forward during the '80s.

To address your position on computer versus nanofac security issues, let's go back to your comment from August 18, 2004 06:09 PM:"I tentatively disagree with point 10 (avoid monoculture). Diversity increases the chance of a security hole. If a security hole only caused local damage, I'd agree that this was a good tradeoff. But if a security hole takes out a neighborhood, then having multiple designs in each neighborhood makes things worse than monoculture, not better."

I agree that diversity increases the chance of a security hole, as more different approaches poke at the same 'code'. However, that same diversity also helps you (in Brett's concept of multiple bug-hunt groups) find those problems before they become issues.

Additionally, if you have a monoculture, you're potentially open to a single exploit paralyzing your whole infrastructure. With a heteroculture, bad things may happen more often, but with much smaller scale.

There *is* an issue of layers of complexity, however. It's an exponential-growth problem to add new technologies and techniques - new 'tools' to the nano-toolkit, if you prefer - as each needs to be considered not only by itself, but also in conjunction with the previously approved techniques and technologies. (For one example of what I mean, take a look at the introduction of compact disk writers on computers.)

""ostrich with its head in the sand": this is frustrating: I suggest a supplemental approach designed to encourage creativity and non-coercively discourage nastiness, and am immediately criticized insultingly for being unrealistic, on the grounds that this won't solve all problems. But I explicitly said that it wouldn't! What's your goal here: to encourage me to improve, or to attack me for not being good enough?"

Again, my apologies - apparently I wasn't sufficiently careful in addressing your position. I do not intend to slam your work - you folks are IMO one of the few organizations which are seriously addressing some of the non-technical issues involving nanotechnology, something I wholeheartedly support.

Hopefully this helps,
-John

Chris Phoenix, CRN


On monoculture, we may need to lay out the discussion more formally. There are several different classes of attacks with very different consequences.

With computers, a compromised computer can result in destructive attack (local denial of service); spying attack; and (remote) denial of service attacks. A compromised computer is vulnerable to the first two of these attacks and can be used to implement/spread any of them to other computers regardless of physical location.

So the damage done is more or less proportional to the number of infected computers; in fast attacks, the damage is some function (an exponent?) of the speed of multiplication, which is some function of the number of vulnerable computers.

With nanofactories, both the set of attacks and the means of spread are very different. I'm assuming that the means of spread is communication from computers, and that there'll be plenty of computer power available to implement whatever attack is found. Attacks include:
1) Shut down a nanofactory.
2) Make the nanofactory make locally nasty stuff. (This is the "take out a neighborhood" scenario. But note that this can also be spy stuff.)
3) Make the nanofactory make unrestricted nanofactories.

The first kind of attack is worse with a monoculture. The second and third are not; they are worse with diversity. The only exception is if there are enough kinds of nanofactory that, on average, each kind will be used less than once within a damage radius--then the second attack will be mitigated, but the third will still be just as bad. Aside from that exception, the damage from these attacks increases proportionally to the chance of hacking *any one* nanofactory design.
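This damage claim can be put in expected-value terms. In the illustrative model below (my construction, with assumed numbers), an attacker who finds a hole in any one of the k designs present within a damage radius can trigger a class-2 or class-3 attack there, so more designs per neighborhood means more chances:

```python
# Chance that a neighborhood containing k distinct nanofactory designs is
# exposed, if each design independently has probability p of harboring an
# exploitable hole. Illustrative model with assumed numbers.
def p_neighborhood_exposed(p_hole_per_design, k_designs):
    return 1 - (1 - p_hole_per_design) ** k_designs

for k in (1, 3, 10):
    print(f"{k} designs per neighborhood: {p_neighborhood_exposed(0.05, k):.1%} exposed")
# 1 design: 5.0%; 3 designs: 14.3%; 10 designs: 40.1%
```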

Chris

Chris Phoenix, CRN


On admiralty law, it's not perfect, but almost everyone follows it.

On certifying orgs, public perception is a separate issue. In your scenario where "no one is saying it's safe," it's easy to manage by having one "official" org that's willing to say "Yes it's safe." Let's for now stick to the technical side. I don't think you've shown that a structure that punishes incorrect approval and rewards correct denial will fail to encourage finding problems.

Chris

Chris Phoenix, CRN


On self-replication, a set of blacksmith's tools can self-replicate--with some help from the blacksmith. The nanofactory would need a chemical supply, and a way of moving new nanofactories away from its output port, and a large supply of wall plugs and cooling systems.

Sure, someone could engineer a chassis with harvesting arms, and onboard chemical processing, and large cooling system, and lots of solar panels, and more robotics to manage all this, and enough smarts to navigate, and a design that wouldn't get blown over by the first breeze... and an onboard nanofactory... and this would be a self-replicating system. But the nanofactory would not be self-replicating any more than your stomach is.

Chris

John B

Posted by: Chris Phoenix, CRN | August 19, 2004 03:26 PM: "-snip- With computers, a compromised computer can result in destructive attack (local denial of service); spying attack; and (remote) denial of service attacks. A compromised computer is vulnerable to the first two of these attacks and can be used to implement/spread any of them to other computers regardless of physical location."

A large limiting factor on upper-end viral replication is bandwidth. Additionally, some outbreaks (the virus that was doubling every 7 to 10 seconds a year or so back, for instance) ended up being shut down in large part due to people taking their systems off the net.

As I understand the scenario, both of these limiting factors will be applicable with regard to nanofactories as well.

"-snip- With nanofactories, both the set of attacks and the means of spread are very different. I'm assuming that the means of spread is communication from computers, and that there'll be plenty of computer power available to implement whatever attack is found. Attacks include:
1) Shut down a nanofactory.
2) Make the nanofactory make locally nasty stuff. (This is the "take out a neighborhood" scenario. But note that this can also be spy stuff.)
3) Make the nanofactory make unrestricted nanofactories."

Excellent points. Note also that locally-nasty stuff could include long-term sabotage devices such as slow disassemblers, which could quite possibly cause a lot of damage before their source is determined and countered.

"The first kind of attack is worse with a monoculture. The second and third are not. They are worse with a diversity. The only exception is if there are enough kinds of nanofactory that on average each kind will be used less than once within a damage radius--then the second attack will be mitigated, but the third will still be just as bad. Aside from that exception, the damage from these attacks increases proportionally to the chance of hacking *any one* nanofactory design."

I don't think I fully agree with you on this one. If I have, say, 20 nanofacs in a limited area, and only one of them 'goes bad', things aren't as dire as if all 20 go bad. That is, the heteroculture situation leads to potentially more survivability than the monoculture.

If we have 10 go bad out of the twenty, then things are pretty desperate, too - the 10 remaining nanofacs are going to need to be thrown into high-production mode of defensive adaptations as quickly as practical - during which time the 10 'bad' facs are spewing out their 'toxins'.

The third case is admittedly the sticky point for the heteroculture and the strongest point of the monoculture. With multiple types of nanofac, you have a greater chance of finding an exploit in proportion to the number of nanofac types out there (ignoring lack of maintenance or other like situations.)

"On admiralty law, it's not perfect, but almost everyone follows it."

Agreed. However, my point was that it was effectively imposed on the rest of the world by a small group. Trying to build consensus on these issues is going to be much more difficult than having it imposed, I fear.

"On certifying orgs, public perception is a separate issue. In your scenario where "no one is saying it's safe," it's easy to manage by having one "official" org that's willing to say "Yes it's safe." Let's for now stick to the technical side."

OK. (I do consider public opinion a very important part of the mix, but the tech side's important too.)

"I don't think you've shown that a structure that punishes incorrect approval and rewards correct denial will fail to encourage finding problems."

Alright. Let's try this, the second part first:

How can you say if something's a "correct denial"? You can say that the group found "concrete faults" with the technology - but if you go much further than this, you're giving away a lot of information as to your techniques for evaluating templates and your capabilities in finding the flaws.

Yes, it's a bit like security via obscurity, but it is also a pretty strong defensive posture. The problem is that the public (or the other nanotechnology template creation organizations, at least) doesn't get the feedback as to what is allowable and what isn't. Therefore, it's a blind-trust situation.

(If you postulate that the test results are publicly available, you're offering incremental knowledge to those who'd like to subvert the process as to your testing capabilities. This means a patient 'bad guy' could, over time, learn how to slip one past you, or how to make several innocuous bits which only become nasty when combined in some odd ratio - gunpowder, perhaps: add in the right amounts of sulfur and saltpeter, and things go boom.)

Given as well that the different organizations may have different techniques, it is possible (I don't know how probable, but possible) that one organization which specializes in a given problem is placed in a position where they 'lose face' over several problems not in their area of expertise. Then along comes a later problem which does fall into their area of expertise, but they've been discredited by missing the other problems, leading to their warning perhaps not being taken seriously.

One way around this might be to allow the accrediting organizations to specify their area(s) of expertise, but this would also potentially open holes in the approval process - who wants to risk their reputation on the hardest of the challenges, like comparing the new technology in combination with all the other technologies in each possible combination?

(Is this post perceived as less offensive than the earlier one? I hope?)

-John

Brett Bellmore

A nanofactory is, for all intents and purposes, just a computer with a really, REALLY good printer attached to it. Malware which compromises the communications aspect WILL be able to spread exponentially, just like current viruses. From there, depending on system design, it may also be able to compromise the manufacturing system (forcing manufacture of certain products, or just ruining it) or the design system, causing hostile features to be inserted into subsequently executed designs.

It could even alternate between such pathways, causing newly made nanofactories to have built in hardware level security holes.

So, faced with a monoculture, the malware doesn't have to produce products which themselves self-replicate, in order to pose an exponentially growing threat. Though without that capability, the level of the threat would saturate out eventually, when every identical machine was compromised, and the threat could be fought by simply yanking wall plugs.

But exploits, as I understand them, generally don't rely on the high-level behavior of a system (unless somebody deliberately built in a backdoor, of course) but instead on peculiarities of the low-level implementation. That's why I suggest that only the high-level description of the control software be common; it can be compiled in each machine using random variation in how the high-level description is implemented. With a large piece of software, and even a couple of alternatives for how each operation is implemented, each nanofactory's control software could be unique. Trying to write a virus to attack computers, all of which are uniquely implemented at the level of machine code, would be a REALLY difficult proposition, and hardly likely to be highly successful.
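As a toy illustration of that diversified-compilation idea (entirely a sketch; real diversity would randomize instruction selection and memory layout rather than arithmetic identities), a build step can pick one of several behaviorally equivalent implementations per operation, seeded per machine:

```python
import random

# Behaviorally equivalent implementations of each high-level operation.
# An exploit keyed to one variant's low-level quirks misses the others.
VARIANTS = {
    "add":    [lambda a, b: a + b, lambda a, b: b + a, lambda a, b: a - (-b)],
    "double": [lambda x: x * 2, lambda x: x + x, lambda x: x << 1],
}

def diversified_build(machine_seed):
    """'Compile' a per-machine implementation: common spec, unique code."""
    rng = random.Random(machine_seed)
    return {op: rng.choice(impls) for op, impls in VARIANTS.items()}

machine_a, machine_b = diversified_build(1), diversified_build(2)
assert machine_a["add"](2, 3) == machine_b["add"](2, 3) == 5  # same behavior
```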

John B

"On self-replication, a set of blacksmith's tools can self-replicate--with some help from the blacksmith. The nanofactory would need a chemical supply, and a way of moving new nanofactories away from its output port, and a large supply of wall plugs and cooling systems.

Sure, someone could engineer a chassis with harvesting arms, and onboard chemical processing, and large cooling system, and lots of solar panels, and more robotics to manage all this, and enough smarts to navigate, and a design that wouldn't get blown over by the first breeze... and an onboard nanofactory... and this would be a self-replicating system. But the nanofactory would not be self-replicating any more than your stomach is."

Mmm. Good points. So what you seem to be positing is that industrial processes used in the here and now (just-in-time delivery, power capacity limited to normal needs, etc.) will be useful in preventing a 'grey goo' of nanofactories?

Truth be told, I hadn't considered it that way. Thanks - one less worry. *smile*

-John

Chris Phoenix, CRN

John, on nanofactory monoculture: sounds like we're agreeing on a lot of points, or at least agreeing that the answers aren't necessarily to be found by comparison with computers.

Brett, on infection: I'm assuming that general-purpose computers will continue to be infected, and will be connected to nanofactories. So any exploit will be able to be piggybacked on a computer worm and sent to all nanofactories almost simultaneously. This is worst-case, but I think reasonable-case.

John, you raise a good question about verification, and bad guys learning to slip designs past based on test information. I'd say that in general, publication of bad design patterns should be encouraged--at least among all the certifying groups, so that they can maximize their skill; and also among trained/legit designers, so that they don't make accidental mistakes. Would bad guys be able to learn from that? In theory, yes--except that they wouldn't have much chance to learn, since a pattern of probing would get them scrutinized pretty hard.

John, on gray goo--I'm not suggesting that there's any need to prevent a gray goo of nanofactories. I'm suggesting that nanofactories are simply not connected to gray goo, so there's no need to "prevent" a gray goo of them.

Chris

todd

I must say this is one of the more complicated issues underlying the rollout of MNT. On the question of security, I cannot say anything definitively positive: based on the number of viruses and security patches given out daily by a variety of companies, it would appear that security is next to impossible to maintain. Also, I would like to ask who is going to occupy these positions of watchdog over the common man, either allowing or restricting his useful products. And why are these individuals so privileged as to be in a position of power over all men? Will these people be voted on, and who will be voting? It seems to me unlikely that the average man living in Siberia, Ethiopia, or Iowa will possess the knowledge to make a rational decision in voting for this position. So we're left with some government organization defining itself as the group in power. This is very troubling.

On another note, consider the large number of possible products that would be brought before this committee, given many millions of individuals designing products for themselves and their families. I am wondering who is going to staff these positions, and how quickly they could approve or reject a useful product. Using the tax system as an example: I believe there are some 40,000 individuals working in the United States Treasury Department attempting to collect taxes from individuals living in the United States, and those individuals in most cases submit only one return per year. Looking at a conservative number of, say, 200 million individuals requesting products perhaps daily, how could this organization possibly deal with the flood of useful product requests? Again, as stated above, an individual product may look benign in nature but become a considerable threat when combined with other products. So we're left with this group not only having to identify the individual product as non-hazardous, but also having to compare it with every other product to arrive at a passing or failing conclusion.

On another note, I would be interested in hearing whether any of you would volunteer for this job, given the probable overtime involved, the overwhelming importance and stress of the position (as you would not wish to make any sort of mistake during your comparisons), and the regret you would feel if a product you passed then destroyed a small city. I am puzzled by the idea of working in any job post-molecular-assembler, but if choices had to be made, I feel it very unlikely I would choose this job as a useful-product designer/acceptor. And you're going to need to locate several hundred thousand individuals with an understanding of a wide variety of products and designs, and of the relationships among them, of which today there are likely only a few qualified individuals.
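A quick back-of-envelope calculation shows the scale of the staffing problem (all numbers below are illustrative assumptions, apart from the 40,000 Treasury figure cited above):

```python
requests_per_day = 200_000_000      # assumed daily product submissions
reviews_per_examiner_per_day = 20   # assumed throughput of a careful reviewer
treasury_staff = 40_000             # headcount cited above, for scale

examiners_needed = requests_per_day // reviews_per_examiner_per_day
print(f"examiners needed: {examiners_needed:,}")                     # 10,000,000
print(f"that is {examiners_needed // treasury_staff}x the Treasury headcount")  # 250x
```

And that is for independent, one-at-a-time reviews; checking each new product against combinations with previously approved ones would grow the workload far faster.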

vikram

It is great to know such interesting things. Thanks, everyone, for giving such an opportunity.
