
December 15, 2004

Comments


Brett Bellmore

Look, I'm a mechanical engineer. I've got a background in chemistry, electronics, mathematics, biology... And I've been following the subject of nanotech since before Engines of Creation. (Yes, there was nanotechnology speculation before Drexler; we just didn't call it nanotechnology, we called it "synthetic biology", and had only the haziest notion of how to accomplish it.) You give me a sharply defined physical problem, and I have a pretty good shot at either giving you a solution or telling you you're out of luck.

That doesn't qualify me to design regulatory agencies. At THAT I'm an amateur. I wonder if there are any pros out there, and if they could be brought into the discussion?

Joe

Hold your breath, I'll go see if I can find one. J/k

Janessa Ravenwood

Staggering...reeling...some guy in a red suit with horns and a pitchfork just told me that his freezer appears to be severely malfunctioning...will get back to you later. :-)

Mike Deering

Brilliant! Never argue with anyone. Always agree with them and then explain to them how you and they have actually been saying the same things all along, just in different ways. Oh, and use their name a lot. People like that.

jim moore

It's funny: back when I was studying sociology, I thought of bureaucracies and markets as types of AIs.

Janessa Ravenwood

Matt: Regarding the Wise Nano “Law Enforcement by AI” page, I have some comments that I’m posting here as that page is becoming impossible to read – it’s difficult to tell who’s saying what.

I have just one question (well, in a few pieces), since I have no interest in that scheme and deconstructing it excessively would be a waste of my time. It is this: what do you plan to do about the legion of people who, like me, want full MNT – and plan to get it – without having a puppet master implant installed in order to do so?

If given a choice between:

A) Getting a full nanofac and an AI puppet master implant.
or
B) Getting a full nanofac and NO AI puppet master implant.

Then I’m betting I’d have an easier job of marketing than you would. Do you really think you can keep full MNT away from us non-implanted people forever? If not, then when we get ahold of it why would anyone go for your scheme if they don’t have to? And of course all of this assumes that your mandatory-puppet-master group of people get it first and are in a position to implement this scheme. Otherwise everyone will just take the nanofacs and decline the puppet master implants. My reaction to this situation would be to raise the Jolly Roger and go for a bootstrapping project with like-minded people, which I’m sure I would have no trouble finding. So I guess I’m ultimately saying that your scheme is totally unenforceable.

BTW, you said that regarding international agreements and organizations I “alternately either exaggerate the extent of control or play down the importance of such agreements and organisations.” Please give me specific examples.

Karl Gallagher

Chris: In the case of the USA, I think more likely than creating "a global control-freak empire" is a policy of making spectacular examples of anyone who gets too blatant in wielding MNT. That'd be combined with a series of rewards for nations that follow good-neighbor MNT policies. Trying for a true global empire is more expensive than a democracy is willing to bear. (All bets are off if the USA stops being a democracy.)

Brett: There are two sets of experts in designing regulatory agencies. The first wants to totally ban something and structures the agency to put as many obstacles in its way as possible. The other is making money doing something and wants the agency to approve current practices while making it impossible for new competitors to enter the market.

Karl Gallagher

A broader thought on the subject of global governance. The US Constitution and the European Union brought together sets of sovereign entities into new, reasonably successful governments. The US Articles of Confederation, the League of Nations, and the UN tried to approximate that and failed. The big differences between the two approaches that I can think of:

1. Formal acknowledgement of final authority resting in the new government.
2. Power within the new government shared roughly proportionally to the actual (military/economic/demographic) power of the components.
3. A process for admitting new bodies to the government that ensures no dangerous entities will be able to use the power of the central government against the rest. (cf. EU debates about offering membership to Turkey)
4. A process for holding the new government accountable to its citizens.

The fundamental conflict between global governance and effective government is that there are large chunks of the world that aren't qualified for citizenship by any rational standard, or even any standard you can get a majority of the West to agree to. So you can have a good government that leaves parts of the world ungoverned or under military occupation... or you can have a global government that isn't good. The former doesn't seem worth the hassle of assembling (at least for the purpose of regulating MNT); the latter I'm sworn to fight.

Chris Phoenix, CRN

Karl: I deliberately didn't say it would be the U.S. that would create the control-freak empire. We may not be the first nation to get MNT.

BTW, for the record, I deleted "Evelyn"'s comment "Wow, that was a lot of good information, thank you." because it appeared to be comment spam.

Chris

Matt

I have some comments that I’m posting here as that page is becoming impossible to read – it’s difficult to tell who’s saying what.

Text in italics is by Chris Phoenix, bold text (that is not the original post) is from Michael Vassar, and the original text and the rest is from me. The replies have been indented to indicate the parent comment and thus their position in the hierarchy. Also, a quick look at the history tab can clear up matters further. But it's true that wiki-style articles are usually not well suited to supporting deep discussion with multiple replies and re-replies.

If given a choice between:

A) Getting a full nanofac and an AI puppet master implant.
or
B) Getting a full nanofac and NO AI puppet master implant.

Then I’m betting I’d have an easier job of marketing than you would.

If you give people these options to choose from, after the right amount of advertising, you're probably right. But then let me ask: what if A) were at least a little closer to my actual proposal?

Do you really think you can keep full MNT away from us non-implanted people forever? If not, then when we get ahold of it why would anyone go for your scheme if they don’t have to? And of course all of this assumes that your mandatory-puppet-master group of people get it first and are in a position to implement this scheme. Otherwise everyone will just take the nanofacs and decline the puppet master implants. My reaction to this situation would be to raise the Jolly Roger and go for a bootstrapping project with like-minded people, which I’m sure I would have no trouble finding. So I guess I’m ultimately saying that your scheme is totally unenforceable.

Please refer back to the "Massive Change" thread and the Wise-Nano article (here and here, respectively, in case you've lost the links). There you can find answers to most, if not all, of the questions and assumptions you repeated here.

BTW, you said that regarding international agreements and organizations I “alternately either exaggerate the extent of control or play down the importance of such agreements and organisations.” Please give me specific examples.

Am I misrepresenting your opinion, as displayed by your comment posts? If the answer is "yes", I'll stop considering this request a kind of joke and come up with examples as you request. But please don't waste my time if the answer is something else.

I have just one question (well, in a few pieces), since I have no interest in that scheme and deconstructing it excessively would be a waste of my time.

Then I might suggest you only criticize my actual proposal and not something I never proposed but that you have read into it. It's not only your time that's being wasted this way.

Janessa Ravenwood

Matt:
A) Re the WN page - if you say so, I don't find it easy to read at all and I don't think I'll try much anymore.
B) Re us non-AI rebels - you just said "That's a different problem." and then later on referenced the IAEA (which is toothless, just ask Kim Jong-Il, and I couldn't read that article you referenced). I was looking for a slightly more specific answer. I mean, really, if the RIAA and the MPAA can't stop song and movie file trading and the DEA can't stop the sale and possession of drugs I'm laughing at the thought of some entity trying to stop me from getting an unrestricted nanofac (which I really do fully intend to do as soon as I can).
C) Re my wanting examples - yes, list them and I'll comment and/or clarify as necessary. If there’s a conflict between my stated views I would like the chance to examine it.
D) Re wasting your time - hey, it's real simple. Not liking, not typing. I didn't bother to comment on the specifics of your concept as I don't see it as something that will ever truly concern me (I see this whole debate on the AI puppet masters as an academic exercise as there’s no way you could ever implement it in reality) and you're free not to comment at all on my posts if you don't want to.

Matt

A) The problem is: You don't care to read or understand the proposal because it'd be a waste of time for you (which is ok for me), still you seem to have time enough for deliberately misrepresenting and criticizing it on that basis (which is not ok for me).

C) Let's see. I went through every post accessible from the main page. As you may or may not remember, similar statements from you appear on a regular basis.

Wisdom isn't easy:

"IAEA (which is toothless,[...])"

Super-Weapons and Global Administration:

"(and the U.N. is an enemy, not an ally)"
" Our veto is the one reason I recommend keeping us in the U.N. – keep your friends close and your enemies closer."

Guns or Butter:

"Well, that is pretty much CRN's MO - try to scare people into going along with their plan to create this global administrative body that will rule us all."

"I firmly believe, having considered the parameters of such a scenario, that such a global nanotech administration would necessarily end up ruling the world in very short order."

"The other problem being that the case in point - the IAEA - is toothless and ineffective"

"Janessa, I'm sorry that any mention of supranational government makes you so paranoid" [That one´s from Chris, not you, but it sums it up pretty well]

D) I just find it unfair to drastically misrepresent my work as you did; I won't let plain wrong statements stand as they are. I don't know how useful or realistic my proposal can be in effect; nonetheless, I don't want to see its words and implications twisted and the idea torn apart on this twisted basis.

Janessa Ravenwood

A) The problem is: You don't care to read or understand the proposal because it'd be a waste of time for you (which is ok for me), still you seem to have time enough for deliberately misrepresenting and criticizing it on that basis (which is not ok for me).
-----
I *have* read it, it's just damn *hard* to read at this point and I don't want to argue the minutiae of it as I see no mechanism of actual implementation in the real world. Like Chris said, it's a wish, not a plan. However, hang on, I'll go plow through it yet again. [10 minutes later.] No, I'm still not clear on your concept of actual *enforcement* of this idea, mainly because you haven't given it. I see no practical measures to stop me from complete circumvention (getting full MM and never having your device implanted). As for misrepresenting the idea, I stand by my assertion that your device is a “puppet master implant” as it is:
1) Implanted
2) Has the ability to override control of the implantee’s limbs if it wants to.

I’m not seeing you effectively disputing this definition other than essentially saying “no it’s not.”

Again, on the implementation this is reminding me of an old episode of South Park:

Phase 1: Steal Underpants
Phase 2: [blank]
Phase 3: Profit!

I see point A, I see point C, I’m seeing no sign of point B. A wish, not a plan.


B) [blank]
-----
No ideas on practical enforcement measures? Thought so, long live the nano-underground!


C) Let’s see. I went through every post accessible from the main page. As you may or may not remember, similar statements from you appear on a regular basis.

Wisdom isn’t easy:
"IAEA (which is toothless,[...])"
-
Super-Weapons and Global Administration:
"(and the U.N. is an enemy, not an ally)"
" Our veto is the one reason I recommend keeping us in the U.N. – keep your friends close and your enemies closer."
-
Guns or Butter:
"Well, that is pretty much CRN's MO - try to scare people into going along with their plan to create this global administrative body that will rule us all."
"I firmly believe, having considered the parameters of such a scenario, that such a global nanotech administration would necessarily end up ruling the world in very short order."
"The other problem being that the case in point - the IAEA - is toothless and ineffective"
"Janessa, I'm sorry that any mention of supranational government makes you so paranoid" [That one’s from Chris, not you, but it sums it up pretty well]
-----
No problem, let me clarify. The vast majority of international organizations are effectively weak, even if they’re not so on paper. If we were to create a new one, it would *probably* be weak as well, and thus pretty ineffective. One of the reasons they are weak is because we (the US) disregard or hamstring them and it’s generally important that we keep doing this, thus staying in the UN is good for us because of our veto. Also, having the UN on US soil makes it easier for our intelligence services to spy on foreign diplomats (or blackmail or recruit them if we can). In some cases, like the IAEA, that is a double-edged sword; that is, it’s ultimately better to try our best to keep them weak (at least concerning us) so that they can’t be used against us even if the consequence is that they’re ineffective against the likes of North Korea, Iran, Pakistan, etc. This doesn’t detract from my point, it proves it. If a new nanotech regulatory body is created, dollars to doughnuts says it’ll end up as part of the UN. As part of the UN it’ll automatically *start out* as a corrupt agency and we’ll hamstring it as usual anyway unless parts of it suit our needs and don’t effectively hinder us.

Now, what *worries* me is some of CRN’s proposals (and the proposals of others here) that concern the creation of a *strong* international agency with teeth. I don’t give such proposals more than infinitesimal odds of ever occurring (I give your AI puppet master proposal a flat 0% chance of ever becoming domestic or international law, so I’m not worried about that one), but on the off chance that someone might actually take them seriously it’s better to deconstruct and denounce them now so as to nip them in the bud and thus continue to protect US sovereignty and at least our current level of freedoms.


D) I just find it unfair to drastically misrepresent my work as you did; I won't let plain wrong statements stand as they are. I don't know how useful or realistic my proposal can be in effect; nonetheless, I don't want to see its words and implications twisted and the idea torn apart on this twisted basis.
-----
As I said, *how* am I misrepresenting it? It's an implant, it can override control of an implantee's limbs, and I see no effective measures to stop me from complete circumvention; in particular I see nothing that would stop a group of non-implantees (like me) from a covert bootstrapping project on our own without access to implantee-only resources. Again, if you can't stop file-sharing and drug sales, you'll never stop me from sharing “The Dummies Guide to Unrestricted Nanofac Construction” or, for that matter, “The Dummies Guide to Hacking Your AI Puppet Master Implant” on Kazaa, BitTorrent, FreeNet, or some other P2P network.

Ultimately, your proposal is “take away free will from humans,” so of course I and a lot of other people will object to it. That's a misrepresentation, you say? Not at all. Right now I can *choose* whether or not to commit antisocial acts. If I do so, I must face the repercussions, but it's *still* my choice. Under your scheme, implanted humans no longer have this choice. But they don't have to sign up, you say? Yes, but if they don't, they're relegated to being effectively second-class citizens. A Faustian choice indeed. Hence, I'm ultimately arguing for free will, while you're saying that it's too dangerous for people to be allowed to possess, much less exercise.

Matt

Janessa, no more replies to you from me. I hate talking against brick walls. Give me back my engagement ring.

Chris Phoenix, CRN


Janessa: "on the off chance that someone might actually take [CRN's proposals] seriously it’s better to deconstruct and denounce them now so as to nip them in the bud"

You have identified a very important distinction. "Deconstruct" and "denounce" are very different.

Deconstruction is fine. If you can show that A could lead to B and B leads to C and C is bad, great! We can start thinking about how to tweak A to avoid B, or what alternatives to A we can find. That's very constructive.

Denouncing, I think, is exactly what I object to in some of your postings here. I don't think it's possible to simultaneously denounce something and communicate with its supporters. And this space is for communication, not soapboxing.

The "I must denounce this" mindset is not constructive. Matt was right to complain about my first attempt at criticism. In future, I intend to watch for that mindset in myself--"Bad things could happen if people read this, so I must show how stupid it is." And to choose a different mindset before I write anything. It is possible to show what's wrong with an idea without rabble-rousing your readers to reject it emotionally.

In short: Discussion good; disagreement good; demagoguery bad.

Chris

Chris Phoenix, CRN


Matt: I have to agree with Janessa that anything which can seize control of someone's limbs--even if just to paralyze them--deserves to be called "puppet master."

Also--please clarify how much technology would be forbidden to non-implanted people. Will they have access to general-purpose computers? Piezoelectric ceramics? Photoresist chemicals and/or the recipes with which to make them? And how do you plan to enforce whatever restrictions you impose? In short, exactly how do you plan to prevent non-implanted people from developing their own nanofactory technology?

Chris

Brett Bellmore

Another vote here for the puppet master title. Maybe you could make the concept less creepy by just implementing Williamson's "Humanoids" instead?

Janessa Ravenwood

Chris: OK, how's "deconstruct and thereby discredit such ideas" grab you?

Tom Craver

I think the whole discussion of AI puppet-masters is moot. In the key period of most concern for this forum - the period of transition to wide availability of molecular manufacturing able to produce just about any physical object - AI won't be good enough. Even if it were, such a scheme could not be implemented fast enough to be useful.

Maybe it'd be possible to rapidly implement the "everyone gets a stipend" scheme - though frankly I think the desirability of that is very debatable. So why not focus the debate on that instead of the improbable puppet-master or AI government thing?

Michael Vassar

I second Tom's motion. Though let's proceed on a new thread.

Brett Bellmore

It's funny, Tom; I think CRN's regulatory schemes are moot for exactly the same reason: by the time the world's governments take this whole thing seriously enough to actually be willing to DO something along those lines, it would be too late to work out all the details and get it implemented.

Tom Craver

Brett: I agree that it is possible MM could evolve that fast - and some days I think that might be the best possibility. But there are a number of ways that governments could have time to think things through and make plans.

A government could develop it in secret in a Manhattan style project. If this scenario proves correct, it is very likely that the government will have very detailed plans in place - probably involving maintaining a monopoly position with an iron fist lightly concealed inside a regulatory glove.

It may take several years to advance from the first primitive assembler that publicly proves the principle convincingly to an assembler capable of duplicating itself - setting off a global race to get there first, or at least not too far behind everyone else. Such a race might make it more difficult to cooperate - but it would also make it clear to governments that they'd better try to understand the issues and get something in place fast, or risk seeing it run out of control.

The comments to this entry are closed.