

April 21, 2005



Tom Mazanec

Interestingly, many geologists think we will pass Peak Oil and enter declining petroleum production within ten years. In our growth-based economy, this could trigger stresses leading to a Greater Depression, war, and a Collapse worse than the Fall of Rome. We may be in for a "negative Singularity" if we do not develop MM and MNT by 2020 at the latest. We cannot gain time by slowing down progress; indeed, most of Africa and parts of Latin America and South Asia are already entering decline. Even the growth in the rest of the world is already being fueled by a stressing of our natural environment such as the planet has seen only 5 or 6 times before, and not since 65,000,000 BC. I would not enjoy my old age if it has to be spent in a world ravaged worse than the Great Dying which ended the Permian, even if 8 or 9 billion people enjoy Western standards of life.

Brett Bellmore

Oil is scarcely the only fossil fuel around, and if oil prices were to go even 20-30% higher on a sustained basis, synthetic fuel from coal (which is far more abundant, just a bit less convenient to use) would become economically feasible. Though I suspect we'd be better off burning the coal in power plants, where the CO2 would be more easily dealt with, and switching to an energy economy based on electricity and non-carbon synthetic fuels such as powerballs.

Mike Treder, CRN

As a matter of full disclosure -- and fun speculation! -- I should state that I am on record at another website (http://www.incipientposthuman.com/upgrades.htm#Interface) stating that a "non-human entity will be able to pass the Turing Test well before 2029." That's not 2015, of course, but it's sooner than many predict.

The opening of Cyc to public interaction could prove to be a crucial step. On the website mentioned above, I wrote, "The being that passes the [Turing] test will be much more than what we think of today as a computer. It will be an amalgam of artificial intelligence, robot, and distributed network."

I also wrote, "This embodied AI will learn not only from direct interaction with humans and full immersion human interaction available over the web, but also from its own subjective experience of being in the world. It will have the ability to independently sense the difference between hot and cold, wet and dry, loud and soft, bright and dark, gentle and rough, polite and rude.

"In many ways, these entities will be like human children, discovering the world anew and with their own unique perspective. The act of cataloging such experiences and developing patterns of recognition, reaction, and response may in fact result in a kind of emotion."

Most of this was written in 2002, I think, but I stand by these pronouncements.

Brett Bellmore

Consider that humans have an inherent advantage in passing the Turing test: we ARE human, and the test requires that you pass for human. A non-human AI capable of passing the test would probably have to be considerably smarter in some ways than a human, as it would have to be capable of accurately *pretending* to be something it wasn't.

Tom Craver

I see you're still using the term "nano-anarchy".

Do you also speak of "lathe anarchy" (terrorists can use them to make bullets and guns and RPGs and centrifuge tubes to make nukes), "fertilizer anarchy" (terrorists can use it to make bombs), "internet anarchy" (criminals use it to gain access to databases for identity theft, terrorists can gather information useful for planning attacks), etc?

"Anarchy" is a loaded term that to most people is synonymous with "chaos", with a strong implication of danger and riots in the streets. You're using it to unfairly bias your readers toward seeing your regulatory position as a rational compromise between two evil extremes. That may be your belief - but that doesn't justify your means of avoiding a deeper dialogue.

Why not call it "nano-freedom", for example? Or "nano-personal-choice"? Or more neutrally, "nano-non-interference"? Or since "regulation" is a term that some view positively and others negatively, how about my old suggestion of "nano-laissez-faire"?

Alternatively, you could refer to your regulatory approach as "nano-fascism". That's about as fair as "nano-anarchy", and carries similarly negative connotations for most people. Round it off with "nano-Big-Brotherism" instead of "relinquishment", and the ideas would all be on even footing.



Lenat is probably being a little over-optimistic about the time-frame there. And AGI probably won't be achieved through the CYC approach - leading researcher Ben Goertzel has taken a look at CYC and is pretty sure it won't work.

A collection of propositions does not make for a general intelligence. We can see that now. CYC probably already has far more initial knowledge than a real AGI would need - 3 million propositions is way too much. Ben Goertzel did say he thought Novamente could be coded in 30,000 lines, so you would have to think that no more than 30,000 initial propositions should be enough for AGI. The fact that CYC has 3 million but has not turned into an AGI probably shows that their approach is wrong. Still, it should be fun to play around with it when it gets hooked up to the net.

I had confidently predicted a Singularity on this blog within 15-35 years. 35 years at the absolute outside, but I think that a Singularity within 15 years is quite possible. Certainly, I do not believe that it will take *me* any longer than 15 years to build an FAI, so that means that someone like Eliezer or Ben should be able to do it even faster - if only they got their act together - *sigh* ;)

Should be an even race between AGI and advanced nano.

Mike Treder, CRN

I'm reading an excellent book called "A World Without Time: The Forgotten Legacy of Gödel and Einstein." (http://tinyurl.com/c5jza) I'd read a little about Gödel's Incompleteness Theorem before, but this book explains it quite understandably. One conclusion his theorem leads to, when applied to general relativity, is that time does not actually exist!

But another application of the theorem, relevant to this discussion, is that a computer can never be as smart as a human being because the extent of its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected truths. Does anyone have, or know of, a credible counter-argument?

Tom Craver

Random changes could extend any set of fixed axioms, a la genetic algorithms.
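(A minimal sketch of the genetic-algorithm idea Tom mentions, in Python: random mutation plus selection incrementally improves a candidate set. The OneMax fitness function and all the parameters are illustrative choices of mine, not anything from the thread.)

```python
import random

def mutate(genome, rate=0.05):
    # The "random changes": flip each bit with small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def evolve(fitness, length=20, pop_size=30, generations=100):
    # Start from a random population, then repeat: keep the fitter
    # half unchanged, refill with mutated copies of the survivors.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return max(pop, key=fitness)

random.seed(0)
best = evolve(fitness=sum)  # OneMax: fitness = number of 1-bits
print(sum(best))
```

Nothing here extends a set of mathematical axioms, of course; it just shows the mutate-and-select loop the analogy rests on.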


Sounds a lot like a book by Julian Barbour; I think it was called "The End of Time". The timeless logic being: a particle in motion has only a 1/6 chance of retracing its previous movement, so macro-scale time symmetry is unlikely. There's no reason an AI couldn't utilize neural networks to learn, but I don't know if you would consider that a computer or not. I don't think source code alone is enough to generate AGI.


Mike - I have not read the book you mention, but the concept of using Godel's theorem to distinguish human from artificial intelligence has been widely discussed in the literature. Lucas proposed it back in 1961, and more recently Roger Penrose wrote two books on the topic, Shadows of the Mind and The Emperor's New Mind. See http://psyche.cs.monash.edu.au/psyche-index-v2.html for a collection of responses to Penrose's version of the argument.

Chris Phoenix, CRN

1) "Anarchy" has the same connotations for us as it does for you. Those connotations aren't hidden. I think it's more accurate than biased--it means what we intend it to mean. It's a strong word, but it reflects our belief that unbridled MM will be extremely, probably intolerably dangerous.

2) I don't see any basis for the belief that humans can solve problems (such as Godel incompleteness or the Halting Problem) that computers can't. A computer can be programmed to tell whether or not most programs halt. A human can't tell whether some programs halt.

3) Google is about to start asking for random video. To me, that says they have a way to process it well enough to make it searchable. Think about all the capabilities implied by that...
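(Chris's halting-problem point in (2) can be made concrete: a program can correctly decide halting for large, restricted classes of programs while honestly answering "don't know" for the rest - it is only a decider for *all* programs that cannot exist. The toy instruction set below is my own illustration, not a real analyzer.)

```python
def halts(program):
    """Partial halting check for a toy language.

    Programs are lists of instructions; supported forms:
      ("noop",)       -- does nothing
      ("loop", n)     -- a loop that runs exactly n times
      ("while_true",) -- an infinite loop
    Returns True/False when the answer is decidable by inspection,
    or None ("don't know") otherwise -- no general decider exists.
    """
    for instr in program:
        if instr[0] == "while_true":
            return False          # provably never halts
        if instr[0] not in ("noop", "loop"):
            return None           # unrecognized construct: give up
    return True                   # only bounded constructs remain

print(halts([("noop",), ("loop", 10)]))  # True
print(halts([("while_true",)]))          # False
```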


michael vassar

Google seems like a much more important driver of emergent singularity than Cyc, but either or both may have a significant impact on the productivity of ordinary or enhanced humans in the decade before the singularity.


>But another application of the theorem, relevant to this discussion, is that a computer can never be as smart as a human being because the extent of its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected truths. Does anyone have, or know of, a credible counter-argument?

Mike, that's just the original version of the argument against AI used by Penrose in his recent books.

It's not correct, because all it shows is that there are maths truths that can't be obtained by a computer program with *100% certainty*. But so what? We can escape the limitations of the Godel theorem simply by relaxing our demand for certainty. Then we can simply assign probabilities to any maths truth we want: we can obtain any degree of certainty we like: 90%, 95%, 99%, 99.9% etc... just not 100%.

The counter-argument to the Godel limitation then is simply this: Yes, it's true that computers can't know maths truths with certainty. BUT NEITHER CAN HUMANS! Even in mathematics we cannot be certain that the axiomatic system we are using is consistent. So whenever we 'understand' new and unexpected truths, we do not do so with 100% certainty. There will always be some degree of intuition, guess-work etc. And there is no reason why a computer could not use guess-work etc. as well.
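(The less-than-100%-certainty idea is not hypothetical: probabilistic primality testing already works exactly this way. Below is a standard Miller-Rabin test in Python - each round caps the chance that a composite slips through at 1/4, so confidence can be made arbitrarily high but never reaches 100%.)

```python
import random

def probably_prime(n, trials=20):
    # Trial-divide by a few small primes first.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime: error chance <= 4**-trials

print(probably_prime(2**61 - 1))  # True (a Mersenne prime)
print(probably_prime(2**61 + 1))  # False (divisible by 3)
```

A "False" answer is certain; a "True" answer is only ever probable - which, as the argument above notes, is also the epistemic position humans are in.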

Tom Craver

cdnprodigy: "I don't think source code alone is enough to generate AGI"

I think there are likely to be practical problems (i.e. speed of operation) with using "source code" in a computer to implement an AGI, but in the Turing sense it seems feasible. Have your source code emulate whatever hardware would be a more ideal implementation.

Tom Craver

>"Anarchy" has the same connotations for us as it does for you. Those connotations aren't hidden. I think it's more accurate than biased--it means what we intend it to mean. It's a strong word, but it reflects our belief that unbridled MM will be extremely, probably intolerably dangerous."
Very well Chris - please explain why "nano-anarchy" would be more dangerous than your proposed nano-fascism. (I'm sure you aren't insulted by that unbiased, accurate term for your position, as it reflects my belief that your proposals will require or lead directly to government oppression of everyone, in a vain attempt to suppress the rebellion your regulations will spark, and the terrorism and crime your nanofactory restrictions will do nothing to stop or even significantly delay.)

And no, you never really have explained what is worse about "nano-anarchy". You've made vague passing references to criminals and terrorists and remote-controlled untraceable murder, as if those things would somehow be reduced under nano-fascism, but never really explained how nano-fascism will actually limit those things.

In fact, in previous posts you've pretty much admitted it won't work in the long run - but you still cling to the hope that in the short term it'll provide some benefits, and maybe that'll get us through to an age when we'll be better able to deal with nano-problems.

But one age creates the pre-conditions for the next: nano-fascism will create the pre-conditions for true anarchy, true chaos, by destroying the written and unwritten social contracts that restrain the vast, VAST majority of us.

We will have a hard enough time avoiding devolution into your nano-fascism after the first instance of nano-terror - we don't need CRN painting it as the only viable solution, in advance.


I think "nano-anarchy" will lead to an arms race eventually unleashing devastating weapons. But "nano-fascism" might not be too high a price to pay for the virtues of nanotechnology, especially if it is the only alternative to nano-anarchy. The loss of privacy in exchange for a greatly increased lifespan, basic material needs for all, and my own nano-assembler (with standard safeguards) - this is an acceptable tradeoff to me. If some people would feel otherwise, they are welcome to live in "clean" cities, free of any MNT-enabling technology, where only exit/entry points are monitored.

Chris Phoenix, CRN

Tom, I do disagree with your statement that we are proposing nano-fascism. There is a difference between proposing an intended consequence, and giving well-meant advice that might have a bad consequence. Note also that Mike wrote, "the result could be nano-anarchy, nano-tyranny, or something even worse." I think nano-tyranny is similar to nano-fascism, and we are warning against it, not proposing it.

We see nano-anarchy and nano-fascism as two opposite extremes. We hope there is a sane and stable range somewhere between the two. We do not think that social codes will be sufficient to keep things stable; all it takes is a few powerful defectors to spoil things for everyone. Look at the Internet, where one script kiddie can cause millions of dollars worth of loss with ten minutes' work, and spammers actually get paid for their vandalism. And if nanofactories are unrestricted, then everyone will be powerful enough to do massive damage of whatever kind they choose. (I don't believe that private uncoordinated defenses will be sufficient to avoid unacceptable tragedy.)

If a weak nano-fascism were established, I agree that it would create the preconditions for nano-anarchy. (A strong nano-fascism would probably be unbreakable by anything other than an internal schism--if you apply enough restrictions and enough monitoring technology and enough (not necessarily trusted) people to watch for problems, you can keep any number of people at a stone age level of technology indefinitely.) But a nano-anarchy would likewise create the preconditions for nano-fascism--out of sheer self-defense.

Again, we are looking for middle-of-the-road solutions. If nanofactories only produced intangible bytes, we would probably be wise to treat them as harmless engines of abundance, and eschew restrictions. If nanofactories only produced weapons, there wouldn't be much harm in restricting them heavily. But they will produce abundance, and weapons, and lots of desirable products. Single-issue idealism will almost certainly produce the wrong answer.


Tom Craver


The problem is that CRN conflates two very distinct concepts under the name "nano-anarchy", and then writes as if they were one and the same - as I am now doing with "nano-fascism".

One idea is "chaos due to MNT", the other is "widespread access to unrestricted MNT". Neither really fits the real definition of "anarchy", but I'll leave that for now. For clarity in my own writing, I'll use "nano-freedom" to refer to the latter, and leave "nano-anarchy" for the former.

You may believe the one leads to the other - but that doesn't explain or excuse your merging of the two, to the benefit of your position in the eyes of anyone who doesn't recognize what you're doing.

And once more you point to a potential danger of MNT - "script kiddies" - and imply it would arise under nano-freedom but would somehow be prevented by nano-fascism.

But once again, no explanation of how nano-fascism will be more effective, leaving me to assume that the answer is "repressive laws enforced against everyone" - following the model now applied to airline passengers, who are all treated as terror suspects, but extended to every area of life.

And by the way - there's nothing in the nano-freedom approach that says defensive efforts must be limited to "private uncoordinated" - that's another false conclusion drawn from equating it with "anarchy". Nano-freedom allows the benefits of both coordinated (i.e. government) and private defensive measures.

michael vassar

I suspect there is an association in your mind between current airport security and fascism. This causes you to miss the point and see nano-tyranny as ineffective. In reality, current airline security is a half-assed "middle path" that restricts freedom in a modest way but doesn't produce security. If you want to be *sure* to avoid another 9/11 you simply ground all the planes forever. We can't afford such global overkill solutions today because even our elites depend on some significant societal efficiency. With MNT they wouldn't need such efficiency, and could afford to eliminate options with millions of benign uses and only a few harmful uses.
Historical tyrannies walk a tight-rope between openness that allows rebellion and inefficiency that leads to global irrelevance (see North Korea). A nanotech monopoly would have no such limitations. Resistance would indeed be futile.

Tom Craver


I don't think of nano-fascism so much as "ineffective" as "undesirable". Or if you wish, expand the idea of effectiveness to include "not destroying the things you're trying to protect, in the process".

Chris Phoenix, CRN


I like your terms. Nano-chaos may be more accurate and descriptive than nano-anarchy. And will you please use distinct terms to talk about technical restriction and oppressive government? Perhaps nano-restriction and nano-tyranny? Under nano-tyranny, it's not clear that the public would have access to nanofactories in any form whatsoever. Under nano-restriction, by definition, the public would have access to nanofactories, albeit restricted nanofactories.

So then Mike's statement would be, "If MM is developed before the world is prepared to manage it safely and responsibly, the result could be nano-chaos, nano-tyranny, or something even worse." Would you agree with that?

The relevant meanings of anarchy are 1) complete lack of political authority; 2) political disorder and confusion. What we have been arguing is that a complete lack of technical restrictions on nanofactories will make it very difficult to maintain any political authority over their use. And that a lack of political authority over such a powerful technology will lead to disorder and confusion, political and otherwise. Nano-freedom thus leads to nano-chaos--unless the incipient nano-chaos inspires a nano-tyranny.

I think we can agree that under nano-freedom, script kiddies could exist, just as they exist today under computer-freedom. What we probably disagree on is the damage that script kiddies would do, and the likely response. I think the damage is extreme, especially once their "older siblings" the spammers and phishers (and blackmailers and extortionists and terrorists) get into the act.

I see a gap between nano-restriction and nano-tyranny at least as large as the gap you see between nano-freedom and nano-chaos. I agree that nano-restriction is limited, and won't work for long unsupported by other policies. But it seems obvious to me that nano-tyranny would be able to prevent script kiddies, by sending everyone back to the stone age and/or killing them if necessary.

And it is not at all obvious to me whether nano-restriction or nano-freedom will lead more surely to nano-tyranny. Nano-freedom will be very difficult to control, and could be very dangerous and destructive, and some group could respond to that destruction by simply taking away everyone's nanofactories (after a struggle causing untold destruction). To put it poetically, nano-freedom opens Pandora's box, and if that turns out to be intolerable, the only way to fix it is to put the whole world in a box.

At one point, while writing this, I was going to use Prohibition as an example of anarchy (lack of political authority in the area of alcohol) (leading to chaos). But around the time I was writing, "A nearly complete lack of political authority--just enough remaining to make organized crime profitable," I realized that it may actually be a counterexample. If alcohol had been trivial for individuals to make, there wouldn't have been any rum runners or mobsters. So this may be a case where limited restriction actually created a black market (leading to crime (leading to chaos)) that wouldn't have existed in a complete lack of authority.

So, is there anywhere in the space of policy options that prevents both script kiddies and black marketeers, but does not lead directly to nano-tyranny?

I don't know. There are at least two questions that will be very difficult to answer. 1) How much nano-restriction will be required to prevent script kiddies and other crimes? 2) How much nano-freedom will be required to prevent governments (or emergency militias) from turning tyrannical?

I'm not even sure the second question is well posed. It uses "Second Amendment" logic - a resistant citizenry keeps government in check. But I think Sherman's March to the Sea in the American Civil War showed that that plan is obsolete. In other words, if government has nanofactories and citizens have nanofactories, then government can do whatever it likes regardless of whether the citizens' nanofactories are restricted. So I don't think nano-freedom is any defense against governments becoming nano-tyrannical. And I do think its consequences may inspire nano-tyranny.

No conclusions yet. Looking forward to your response.


Tom Craver

Thank you, yes, that is much clearer.

"Script kiddies" could mean several things. The most straightforward idea would be to produce a worm or virus that infects a nanofactory to make it produce something undesired by the owner. The term also implies the creation of a tool to make that easy for an unskilled person to do.

It also implies that prior to the existence of the tool, the activity involved substantial skillz, and that those involved were doing it out of ego-gratification - until eventually someone wrote a tool to show off his superiority: by making a trivial-to-use tool that can do the same things as his competitors, he shows their efforts to be relatively trivial.

I would expect the computer virus approach to be done, if possible - but because it is rather old hat, it may not be considered a challenge very long. So a virus/worm generation tool may be produced very early on as a convenience to nano-crackers - it'd just be a tool in their kit.

I would expect the crackers to go beyond that. I would look to the 'antics' of MIT and CalTech students for a model. They have long done reality hacking - also known as "pranks". The future elite nano-hacker will likely gratify his ego by engaging in this sort of thing, perhaps using his own nanofactory to produce the prank objects, but more likely leveraging many other people's nanofactories in order to have a widespread effect.

However, the nano-hackers, being ego-driven, will also tend to establish an unwritten code that lets them consider themselves "not the real bad guys" - likely meaning their pranks will mostly be "non-harmful" (by their standards). A clever prank may be embarrassing, inconvenient and wasteful - but not life-threatening or permanently damaging. (Of course, with material goods devalued, "inconvenient" might mean finding most of the cars in a city EXCEPT emergency vehicles 'melted' one morning.)

The major danger is probably not the crackers, nor the kiddies who want to be like them without all the effort. It'd be criminals and terrorists and armies making use of the methods and tools the crackers develop. The "script kiddie" danger is mainly due to that trend causing an acceleration of tool development that would likely eventually happen anyhow (if only by the military).

Let's differentiate between nanofactories under nano-freedom and nano-restriction.

Both will have attempts to prevent intrusion - e.g. keep them off the internet so human intervention is required to transport a new design into the system; have services that offer tested-safe designs; virus-scanners; "Read-only-design" factories to ensure the ability to safely produce key goods (under nano-freedom, this would include making a new not-Read-only nanofactory, if the old one gets corrupted).

A restricted factory would additionally be limited in specific ways: only able to produce a built-in set of designs, excluding any non-read-only factory design; or only running designs from an officially approved design source; or monitored to report on (and possibly shut down) a factory if it starts producing something illegal or likely dangerous.

The problem with all of those restrictions is that they pose challenges to the cracker mentality, while creating pent-up demand for prohibited products, or simply products the officials haven't gotten around to approving yet. The harder restrictions are to crack, the more effort will be put into cracking them. I'd guess that such restrictions would be bypassed, one way or another, within the first year of a nanofactory's existence.

If it is really tough to crack and cracking one doesn't give access to all, one nanofactory will be cracked, and "freed" nanofactories rapidly distributed. Or someone will figure out a way to widely distribute a physical virus-analog (a device that modifies factories to disable the restrictions). I.e. nano-restriction evolving toward nano-freedom. Unfortunately, that would likely trigger increased repression by the increasingly desperate government.


I think if nano-factories that can be hacked are dispersed, the nano-battle will have already been lost. Some form of quantum encryption in the nano-factory keys should keep them safe from hackers, assuming the key keepers can be trusted.

Chris Phoenix, CRN

Tom, I think we largely agree on the hacker->kiddie->criminal progression and implications.

While we're hashing out terms, what do you think of "unrestricted nano" as a replacement for "nano-freedom"? The latter seems too apple-pie-ish. Note that unrestricted doesn't mean undefended; I have unrestricted access to my house, but I still have to pause to unlock the front door.

Suppose that nanofactory vendors put in "virus-scanning" software with real-time updates? Any time a blueprint was being built, it'd be transmitted to the company for analysis. The system would warn you if you were about to build a known-bad or suspected-bad product. Doesn't sound too bad as long as you trust the company--and people implicitly trust Microsoft, Macromedia, Real Networks, and a host of others today (probably more than they should--did you know the Macromedia EULA says they get to audit you, and if you've installed Flash on a computer that's too portable, you may have to pay for the audit?)
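(A sketch of how that "virus-scanning" step might look in software: hash each blueprint and check it against a vendor blocklist of known-bad designs, with a cruder keyword heuristic for "suspected-bad". Every name, digest, and keyword below is invented for illustration; nothing here is a real vendor API.)

```python
import hashlib

# Hypothetical vendor-side blocklist: SHA-256 digests of known-bad
# blueprints (the sample entry is illustrative, not a real design).
KNOWN_BAD = {
    hashlib.sha256(b"design: free-running replicator").hexdigest(),
}
SUSPECT_KEYWORDS = (b"replicator", b"aerosol")  # crude heuristic layer

def scan_blueprint(blueprint: bytes) -> str:
    """Return 'blocked', 'warn', or 'ok' for a candidate build."""
    if hashlib.sha256(blueprint).hexdigest() in KNOWN_BAD:
        return "blocked"  # exact match to a known-bad design
    if any(k in blueprint for k in SUSPECT_KEYWORDS):
        return "warn"     # suspected-bad: warn the user first
    return "ok"

print(scan_blueprint(b"design: free-running replicator"))  # blocked
print(scan_blueprint(b"design: garden chair"))             # ok
```

Like any blocklist, this only catches designs the vendor has already seen or can pattern-match, which is why the accountability-and-detection framing matters more than the filter itself.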

Anyway, what I'm getting at is that the restrictions don't have to come directly from government, and they may be directed at accountability and detection rather than wholesale prevention. That would reduce the impetus to crack them. Even DeCSS took, what, two or three years?

Here's an idea, one of many possibilities: For security and anti-theft (and tracking) purposes, make each nano-built product contain something like an RFID, and if an alien product gets too close to you for too long, an alarm rings. In other words, make physical objects non-transferable. (Transferring objects is a violation of the single-user EULA, don't'cha know.) If you see an object you like, just get your own nanofactory to build one for you. That would make it hard for a black market ever to get started. But aside from black marketeers, there'd be no commercial incentive to crack it. Untagged (invisible) products would be a huge threat to everyone's privacy, so even people who didn't like the privacy implications of carrying around a bunch of tags would hesitate to crack the tagging.
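(The tag-and-alarm scheme reduces to a simple check: any object whose tag is registered to someone else and has lingered in range past a threshold trips the alarm. The data model and the 60-second threshold below are my illustrative assumptions, not part of the proposal.)

```python
def alien_objects(nearby_tags, my_id, dwell_seconds, threshold=60):
    """Return the tag ids of foreign objects that have lingered too long.

    nearby_tags:   mapping of tag id -> owner id, for tags currently in range
    dwell_seconds: mapping of tag id -> seconds the tag has been in range
    """
    return sorted(
        tag for tag, owner in nearby_tags.items()
        if owner != my_id and dwell_seconds.get(tag, 0) > threshold
    )

nearby = {"chair-1": "alice", "tool-7": "bob", "cup-3": "bob"}
dwell = {"chair-1": 500, "tool-7": 120, "cup-3": 10}
print(alien_objects(nearby, "alice", dwell))  # ['tool-7']
```

The interesting policy questions - who runs the tag registry, and what counts as "too long" - sit outside the code, which is exactly where the cracking incentives would concentrate.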

If we can survive even five years post-nanofactory, any plans we can make today will be meaningless, and we'll have had five years to make much better plans. So if even onerous restrictions might take a year to crack, I'm actually encouraged.


Chris Phoenix, CRN

cdnprodigy, no kind of restrictions in a nanofactory can prevent someone from building their own nanofactory from scratch. Don't depend too much on technology.

