I'll start by quoting a comment that "John B" wrote in the "Massive Change" thread:
However, this is besides the point. We have here a small group of very intelligent, coherent posters, who're quite familiar with the technology's concepts and who all mostly agree that it's coming. Yet we can't agree on how to safely use it. Astounding.
The appropriate use and administration of immensely powerful technology will be very difficult. Those who are trying to come up with workable plans will need to examine all the options--and examine themselves.
Last night I got an important insight into a running argument I've had with Janessa over whether international administration can be part of a solution. I was responding to a discussion on Wise-Nano. Matt had posted a page "Law enforcement by AI" recommending that an "AI" be installed in people's spines, able to watch their actions and disable their muscles temporarily if they started to do something dangerous and illegal.
The proposal made me itch, and I responded rather sharply. Matt objected to my response, and I started to write a clearer explanation. I wrote:
If you ask the question, "Do we know how to do Step 1 [invent a superintelligence]? Is it something we can use as a plan?" then I'd simply give the answer, "No, we have no clue how to do it or when it might be achieved. We can't count on it as a solution." The other problem is that Step 1 is likely to go wrong. If you ask, "If we tried to invent a safe superintelligence for the purpose of giving it intimate control over our lives, what's the chance we'd get it wrong and create a very icky situation?" my first reaction is, "We'd be babies playing with fire."
Then I noticed that this was very close to Janessa's argument against international administration. Do we know how to invent an international administration? Is it something we can use as a plan? Could we invent a safe administration and give it intimate control over our lives, or would we be likely to get it wrong and create a very icky situation?
Even the tone of my original response to Matt--"You have no idea what you're talking about; even the proposal is dangerous because someone might think it meant something"--is rather reminiscent of Janessa's tone on occasion.
I now have a lot more sympathy for Janessa's position.
Janessa and I both live in a country that was invented, and got it pretty darn close to right. Not perfect, by a long shot. But good enough to self-correct. We survived the Alien and Sedition Acts, the Civil War, and McCarthyism, among many others. We will survive Abu Ghraib.
But we also live in a world where the League of Nations failed, and the United Nations is frequently ignored by the world's most powerful country and distrusted by many of its citizens.
Unlike superintelligence, we do know something about governance. We know that "Democracy is the worst form of government except for all those others that have been tried." We know that separation of powers seems to work pretty well at providing checks and balances. We know that good governments can be invented, but it's hard to get it right.
What I wrote to Matt is at least partly applicable to CRN's call for international administration. "Without even the sketchiest architecture for this 'AI,' there's no way to evaluate what its effects will be or how reliably it will work. This is not yet a proposal, just a wish." But there are at least the beginnings of an architecture, in our Three Systems paper. Is it enough? No. Is it a good start? Maybe.
Should CRN be advocating an international administration at this point? It's a good question. The only excuse for proposing it even before it's defined is if 1) it would happen anyway, and it's better to try to guide it than let it develop unguided; or 2) not having some kind of coordinated administration would be intolerably destructive. It seems to me that 2) is the case; Tom Craver is making a good argument against that, but so far I have to disagree with him. And if we don't have an effective international administration, it seems likely that some powerful nation will feel that it can't be safe as long as people outside its influence have molecular manufacturing--and will create a global control-freak empire, which is likely to be worse (even for its own citizens) than a planned special-purpose administration.
These arguments should continue.
Chris
Look, I'm a mechanical engineer. I've got a background in chemistry, electronics, mathematics, biology... And I've been following the subject of nanotech since before Engines of Creation. (Yes, there was nanotechnology speculation before Drexler; we just didn't call it nanotechnology, we called it "synthetic biology", and had only the haziest notion of how to accomplish it.) You give me a sharply defined physical problem, and I have a pretty good shot at either giving you a solution or telling you you're out of luck.
That doesn't qualify me to design regulatory agencies. At THAT I'm an amateur. I wonder if there are any pros out there, and if they could be brought into the discussion?
Posted by: Brett Bellmore | December 15, 2004 at 08:04 AM
Hold your breath, I'll go see if I can find one. J/k
Posted by: Joe | December 15, 2004 at 09:24 AM
Staggering...reeling...some guy in a red suit with horns and a pitchfork just told me that his freezer appears to be severely malfunctioning...will get back to you later. :-)
Posted by: Janessa Ravenwood | December 15, 2004 at 10:25 AM
Brilliant! Never argue with anyone. Always agree with them and then explain to them how you and they have actually been saying the same things all along, just in different ways. Oh, and use their name a lot. People like that.
Posted by: Mike Deering | December 15, 2004 at 12:28 PM
It's funny: back when I was studying sociology, I thought of bureaucracies and markets as types of AIs.
Posted by: jim moore | December 15, 2004 at 04:13 PM
Matt: Regarding the Wise Nano “Law Enforcement by AI” page, I have some comments that I’m posting here as that page is becoming impossible to read – it’s difficult to tell who’s saying what.
I have just one question (well, in a few pieces), since I have no interest in that scheme and deconstructing it excessively would be a waste of time for me. It is this: what do you plan to do about the legion of people who, like me, want full MNT – and plan to get it – without having a puppet master implant installed in order to do so?
If given a choice between:
A) Getting a full nanofac and an AI puppet master implant.
or
B) Getting a full nanofac and NO AI puppet master implant.
Then I’m betting I’d have an easier job of marketing than you would. Do you really think you can keep full MNT away from us non-implanted people forever? If not, then when we get ahold of it, why would anyone go for your scheme if they don’t have to? And of course all of this assumes that your mandatory-puppet-master group of people get it first and are in a position to implement this scheme. Otherwise everyone will just take the nanofacs and decline the puppet master implants. My reaction to this situation would be to raise the Jolly Roger and go for a bootstrapping project with like-minded people, which I’m sure I would have no trouble finding. So I guess I’m ultimately saying that your scheme is totally unenforceable.
BTW, you said that, regarding international agreements and organizations, I “alternately either exaggerate the extent of control or play down the importance of such agreements and organisations.” Please give me specific examples.
Posted by: Janessa Ravenwood | December 16, 2004 at 12:28 PM
Chris: In the case of the USA, I think more likely than creating "a global control-freak empire" is a policy of making spectacular examples of anyone who gets too blatant with wielding MNT. That'd be combined with a series of rewards for nations that follow good-neighbor MNT policies. Trying for a true global empire costs more than a democracy is willing to pay. (All bets are off if the USA stops being a democracy.)
Brett: There are two sets of experts in designing regulatory agencies. The first wants to totally ban something and structures the agency to put as many obstacles in its way as possible. The other is making money doing something and wants the agency to approve the current practices while making it impossible for new competitors to enter the market.
Posted by: Karl Gallagher | December 16, 2004 at 12:40 PM
A broader thought on the subject of global governance. The US Constitution and the European Union brought together sets of sovereign entities into new, reasonably successful governments. The US Articles of Confederation, the League of Nations, and the UN tried to approximate that and failed. The big differences I can see between the two approaches:
1. Formal acknowledgement of final authority resting in the new government.
2. Power within the new government shared roughly proportionally to the actual (military/economic/demographic) power of the components.
3. A process for admitting new bodies to the government that ensures no dangerous entities will be able to use the power of the central government against the rest. (cf. EU debates about offering membership to Turkey)
4. A process for holding the new government accountable to its citizens.
The fundamental conflict between global governance and effective government is that there are large chunks of the world that aren't qualified for citizenship by any rational standard, or even by any standard you can get a majority of the West to agree to. So you can have a good government that leaves parts of the world ungoverned or under military occupation... or you can have a global government that isn't good. The former doesn't seem worth the hassle of assembling (at least for the purpose of regulating MNT); the latter I'm sworn to fight.
Posted by: Karl Gallagher | December 16, 2004 at 01:09 PM
Karl: I deliberately didn't say it would be the U.S. that would create the control-freak empire. We may not be the first nation to get MNT.
BTW, for the record, I deleted "Evelyn"'s comment "Wow, that was a lot of good information, thank you." because it appeared to be comment spam.
Chris
Posted by: Chris Phoenix, CRN | December 16, 2004 at 01:19 PM
I have some comments that I’m posting here as that page is becoming impossible to read – it’s difficult to tell who’s saying what.
-----
Text in italics is by Chris Phoenix, bold text (that is not the original post) is from Michael Vassar, and the original text and the rest is from me. The replies have been indented to indicate the parent comment and thus the position in the hierarchy. Also, a quick look at the history tab can clear up matters further. But it's true that wiki-style articles are usually not well suited to supporting deep discussion with multiple replies and re-replies.
If given a choice between:
A) Getting a full nanofac and an AI puppet master implant.
or
B) Getting a full nanofac and NO AI puppet master implant.
Then I’m betting I’d have an easier job of marketing than you would.
-----
If you give people these options to choose from, after the right amount of advertising, you're probably right. But then let me ask: what if A) were at least a little closer to my actual proposal?
Do you really think you can keep full MNT away from us non-implanted people forever? If not, then when we get ahold of it, why would anyone go for your scheme if they don’t have to? And of course all of this assumes that your mandatory-puppet-master group of people get it first and are in a position to implement this scheme. Otherwise everyone will just take the nanofacs and decline the puppet master implants. My reaction to this situation would be to raise the Jolly Roger and go for a bootstrapping project with like-minded people, which I’m sure I would have no trouble finding. So I guess I’m ultimately saying that your scheme is totally unenforceable.
-----
Please refer back to the "Massive Change" thread and the Wise-Nano article (here and here, respectively, in case you've lost the links). There you can find answers to most, if not all, of the questions and assumptions you repeated here.
BTW, you said that, regarding international agreements and organizations, I “alternately either exaggerate the extent of control or play down the importance of such agreements and organisations.” Please give me specific examples.
-----
Am I misrepresenting your opinion, as displayed by your comment posts? If the answer is "yes", I'll stop considering this request a kind of joke and come up with examples as you request. But please don't waste my time if the answer is something else.
I have just one question (well, in a few pieces), since I have no interest in that scheme and deconstructing it excessively would be a waste of time for me.
-----
Then I might suggest you only criticize my actual proposal and not something I never proposed but that you have read into it. It's not only your time that's being wasted this way.
Posted by: Matt | December 16, 2004 at 05:29 PM
Matt:
A) Re the WN page - if you say so; I don't find it easy to read at all, and I don't think I'll try much more.
B) Re us non-AI rebels - you just said "That's a different problem." and then later on referenced the IAEA (which is toothless – just ask Kim Jong-Il – and I couldn’t read that article you referenced). I was looking for a slightly more specific answer. I mean, really, if the RIAA and the MPAA can't stop song and movie file trading, and the DEA can't stop the sale and possession of drugs, I'm laughing at the thought of some entity trying to stop me from getting an unrestricted nanofac (which I really do fully intend to do as soon as I can).
C) Re my wanting examples - yes, list them and I'll comment and/or clarify as necessary. If there’s a conflict between my stated views, I would like the chance to examine it.
D) Re wasting your time - hey, it's real simple: not liking, not typing. I didn't bother to comment on the specifics of your concept as I don't see it as something that will ever truly concern me (I see this whole debate on the AI puppet masters as an academic exercise, since there’s no way you could ever implement it in reality), and you're free not to comment at all on my posts if you don't want to.
Posted by: Janessa Ravenwood | December 16, 2004 at 06:03 PM
A) The problem is: You don't care to read or understand the proposal because it'd be a waste of time for you (which is ok for me), yet you seem to have time enough to deliberately misrepresent and criticize it on that basis (which is not ok for me).
C) Let's see. I went through every post accessible from the main page. As you may or may not remember, similar statements from you appear on a regular basis.
Wisdom isn't easy:
"IAEA (which is toothless,[...])"
Super-Weapons and Global Administration:
"(and the U.N. is an enemy, not an ally)"
" Our veto is the one reason I recommend keeping us in the U.N. – keep your friends close and your enemies closer."
Guns or Butter:
"Well, that is pretty much CRN's MO - try to scare people into going along with their plan to create this global administrative body that will rule us all."
"I firmly believe, having considered the parameters of such a scenario, that such a global nanotech administration would necessarily end up ruling the world in very short order."
"The other problem being that the case in point - the IAEA - is toothless and ineffective"
"Janessa, I'm sorry that any mention of supranational government makes you so paranoid" [That one´s from Chris, not you, but it sums it up pretty well]
D) I just find it unfair to drastically misrepresent my work as you did; I won't let plainly wrong statements stand as they are. I don't know how useful or realistic my proposal can be in effect, but nonetheless I don't want to see its words and implications twisted and the idea torn apart on this twisted basis.
Posted by: Matt | December 17, 2004 at 02:29 AM
A) The problem is: You don't care to read or understand the proposal because it'd be a waste of time for you (which is ok for me), yet you seem to have time enough to deliberately misrepresent and criticize it on that basis (which is not ok for me).
-----
I *have* read it, it’s just damn *hard* to read at this point, and I don’t want to argue the minutiae of it as I see no mechanism of actual implementation in the real world. Like Chris said, it’s a wish, not a plan. However, hang on, I’ll go plow through it yet again. [10 minutes later.] No, I’m still not clear on your concept of actual *enforcement* of this idea, mainly because you haven’t given it. I see no practical measures to stop me from complete circumvention (getting full MM and never having your device implanted). As for misrepresenting the idea, I stand by my assertion that your device is a “puppet master implant” as it is:
1) Implanted
2) Has the ability to override control of the implantee’s limbs if it wants to.
I’m not seeing you effectively disputing this definition other than essentially saying “no it’s not.”
Again, on the implementation this is reminding me of an old episode of South Park:
Phase 1: Collect Underpants
Phase 2: [blank]
Phase 3: Profit!
I see point A, I see point C, I’m seeing no sign of point B. A wish, not a plan.
B) [blank]
-----
No ideas on practical enforcement measures? Thought so, long live the nano-underground!
C) Let’s see. I went through every post accessible from the main page. As you may or may not remember, similar statements from you appear on a regular basis.
Wisdom isn’t easy:
"IAEA (which is toothless,[...])"
-
Super-Weapons and Global Administration:
"(and the U.N. is an enemy, not an ally)"
" Our veto is the one reason I recommend keeping us in the U.N. – keep your friends close and your enemies closer."
-
Guns or Butter:
"Well, that is pretty much CRN's MO - try to scare people into going along with their plan to create this global administrative body that will rule us all."
"I firmly believe, having considered the parameters of such a scenario, that such a global nanotech administration would necessarily end up ruling the world in very short order."
"The other problem being that the case in point - the IAEA - is toothless and ineffective"
"Janessa, I'm sorry that any mention of supranational government makes you so paranoid" [That one’s from Chris, not you, but it sums it up pretty well]
-----
No problem, let me clarify. The vast majority of international organizations are effectively weak, even if they’re not so on paper. If we were to create a new one, it would *probably* be weak as well, and thus pretty ineffective. One of the reasons they are weak is that we (the US) disregard or hamstring them, and it’s generally important that we keep doing this; thus, staying in the UN is good for us because of our veto. Also, having the UN on US soil makes it easier for our intelligence services to spy on foreign diplomats (or blackmail or recruit them if we can). In some cases, like the IAEA, that is a double-edged sword; that is, it’s ultimately better to try our best to keep them weak (at least concerning us) so that they can’t be used against us, even if the consequence is that they’re ineffective against the likes of North Korea, Iran, Pakistan, etc. This doesn’t detract from my point; it proves it. If a new nanotech regulatory body is created, dollars to doughnuts says it’ll end up as part of the UN. As part of the UN it’ll automatically *start out* as a corrupt agency, and we’ll hamstring it as usual anyway unless parts of it suit our needs and don’t effectively hinder us.
Now, what *worries* me is some of CRN’s proposals (and the proposals of others here) that concern the creation of a *strong* international agency with teeth. I don’t give such proposals more than infinitesimal odds of ever occurring (I give your AI puppet master proposal a flat 0% chance of ever becoming domestic or international law, so I’m not worried about that one), but on the off chance that someone might actually take them seriously it’s better to deconstruct and denounce them now so as to nip them in the bud and thus continue to protect US sovereignty and at least our current level of freedoms.
D) I just find it unfair to drastically misrepresent my work as you did; I won't let plainly wrong statements stand as they are. I don't know how useful or realistic my proposal can be in effect, but nonetheless I don't want to see its words and implications twisted and the idea torn apart on this twisted basis.
-----
As I said, *how* am I misrepresenting it? It’s an implant, it can override control of an implantee’s limbs, and I see no effective measures to stop me from complete circumvention; in particular, I see nothing that would stop a group of non-implantees (like me) from a covert bootstrapping project on our own without access to implantee-only resources. Again, if you can’t stop file-sharing and drug sales, you’ll never stop me from sharing “The Dummies Guide to Unrestricted Nanofac Construction” or, for that matter, “The Dummies Guide to Hacking Your AI Puppet Master Implant” on Kazaa, BitTorrent, FreeNet, or some other P2P network.
Ultimately, your proposal is “take away free will from humans,” so of course I and a lot of other people will object to it. That’s a misrepresentation, you say? Not at all. Right now I can *choose* whether or not to commit antisocial acts. If I do so, I must face the repercussions, but it’s *still* my choice. Under your scheme, implanted humans no longer have this choice. But they don’t have to sign up, you say? Yes, but if they don’t they’re relegated to effectively second-class citizens. A Faustian choice indeed. Hence, I’m ultimately arguing for free will while you’re saying that that’s too dangerous for people to be allowed to possess, much less exercise.
Posted by: Janessa Ravenwood | December 17, 2004 at 11:07 AM
Janessa, no more replies to you from me. I hate talking against brick walls. Give me back my engagement ring.
Posted by: Matt | December 17, 2004 at 05:11 PM
Janessa: "on the off chance that someone might actually take [CRN's proposals] seriously it’s better to deconstruct and denounce them now so as to nip them in the bud"
You have identified a very important distinction. "Deconstruct" and "denounce" are very different.
Deconstruction is fine. If you can show that A could lead to B and B leads to C and C is bad, great! We can start thinking about how to tweak A to avoid B, or what alternatives to A we can find. That's very constructive.
Denouncing, I think, is exactly what I object to in some of your postings here. I don't think it's possible to simultaneously denounce something and communicate with its supporters. And this space is for communication, not soapboxing.
The "I must denounce this" mindset is not constructive. Matt was right to complain about my first attempt at criticism. In future, I intend to watch for that mindset in myself--"Bad things could happen if people read this, so I must show how stupid it is." And to choose a different mindset before I write anything. It is possible to show what's wrong with an idea without rabble-rousing your readers to reject it emotionally.
In short: Discussion good; disagreement good; demagoguery bad.
Chris
Posted by: Chris Phoenix, CRN | December 18, 2004 at 01:33 PM
Matt: I have to agree with Janessa that anything which can seize control of someone's limbs--even if just to paralyze them--deserves to be called "puppet master."
Also--please clarify how much technology would be forbidden to non-implanted people. Will they have access to general-purpose computers? Piezoelectric ceramics? Photoresist chemicals and/or the recipes with which to make them? And how do you plan to enforce whatever restrictions you impose? In short, exactly how do you plan to prevent non-implanted people from developing their own nanofactory technology?
Chris
Posted by: Chris Phoenix, CRN | December 18, 2004 at 01:37 PM
Another vote here for the puppet master title. Maybe you could make the concept less creepy by just implementing Williamson's "Humanoids" instead?
Posted by: Brett Bellmore | December 18, 2004 at 01:57 PM
Chris: OK, how's "deconstruct and thereby discredit such ideas" grab you?
Posted by: Janessa Ravenwood | December 18, 2004 at 03:32 PM
I think the whole discussion of AI puppet-masters is moot. In the key period of most concern for this forum - the period of transition to wide availability of molecular manufacturing able to produce just about any physical object - AI won't be good enough. Even if it were, such a scheme could not be implemented fast enough to be useful.
Maybe it'd be possible to rapidly implement the "everyone gets a stipend" scheme - though frankly I think the desirability of that is very debatable. So why not focus the debate on that instead of the improbable puppet master or AI government thing?
Posted by: Tom Craver | December 19, 2004 at 01:07 AM
I second Tom's motion. Though let's proceed on a new thread.
Posted by: Michael Vassar | December 19, 2004 at 07:35 AM
It's funny, Tom; I think CRN's regulatory schemes are moot for exactly the same reason: by the time the world's governments take this whole thing seriously enough to actually be willing to DO something along those lines, it would be too late to work out all the details and get it implemented.
Posted by: Brett Bellmore | December 19, 2004 at 08:38 AM
Brett: I agree that it is possible MM could evolve that fast - and some days I think that might be the best possibility. But there are a number of ways that governments could have time to think things through and make plans.
A government could develop it in secret in a Manhattan-style project. If this scenario proves correct, it is very likely that the government will have very detailed plans in place - probably involving maintaining a monopoly position with an iron fist lightly concealed inside a regulatory glove.
It may take several years to advance from the first primitive assembler that publicly proves the principle convincingly, to an assembler capable of duplicating itself - setting off a global race to get there first, or at least not too far behind everyone else. Such a race might make it more difficult to cooperate - but also make it clear to governments that they'd better try to understand the issues and get something in place fast, or risk seeing it run out of control.
Posted by: Tom Craver | December 19, 2004 at 10:21 AM