
April 19, 2004

Listed below are links to weblogs that reference Safe Abundance:

Comments


Crawford Kilian

Mike Treder writes:
"Handing out free nanofactories to everyone sounds like a great idea. CRN thinks something like that should be arranged. However, the nation, corporation or consortium that spends billions of dollars to develop the first nanofactory might disagree. How will their investment be recouped? If we don't set up an international development program in advance, the initial developer could be in a position to name their own price."

CK:
Who cares about recouping an investment when your nanotechnology produces everything you could possibly want? If someone were daft enough to pay you billions of dollars, what would you need to spend them on?

Mike Treder, CRN

Remember that the first few generations of nanofactories will not be able to produce "everything you could possibly want", because they will be limited to using a few basic elements.

Brett Bellmore

"If someone were daft enough to pay you billions of dollars, what would you need to spend them on?"

An outside supply of power, so that I don't have to pave my estate over with solar cells?

Pure Carbon 12, to get better performance from what I build?

The time of really bright people, spent designing what *I* want, rather than what *they* want? Some of the things I'd like are wildly complex, after all, and sufficiently idiosyncratic that they're never going to show up on the "neat design of the month club". Like figuring out how to revive some of my friends who are vacationing in liquid nitrogen... THAT could take thousands of man-years of clever thinking.

Being able to locate my estate on a few hundred acres in Hawaii instead of an acre in the middle of a landfill in Alberta?

Convincing the winner of the Miss Universe contest that she really does want to go out on a date with a 50 year old overweight bald guy? ;)

Some people have gotten the crazy notion that, if you have a device that can manufacture anything you've got the design for, scarcity, and thus money, disappear. That's silly: there are plenty of things that are scarce besides manufacturing capacity. In fact, for many products, manufacturing costs are ALREADY the lesser part of the price.

And even if nanotech DID eventually make money obsolete, and give us all genie machines which would create anything we wished for, suggest new wishes, and tuck us into bed at night, (As if!) it's not going to do so immediately. There will be a rather long transition during which having lots and lots of money will be very advantageous. And people being mortal, they want to be well off today and tomorrow, not just eventually. Not the least because they might not be around when "eventually" arrives.

So the inventors ARE going to want their billions, after all, and will get them, probably by releasing only products, not factories, and then just very limited factories. The table-top appliance that can make anything you download the design for is going to be quite some time in arriving, even after it becomes technologically feasible.

micheal vassar

Outside energy. Nope. You don't need much cheap land to make all the energy that Earth can cheaply use. You need to buy heat pollution credits, not energy.
C12. Boy, you are greedy. For space travel? Lighter power supplies?
Time. In spades, but the really bright people will be REALLY expensive, especially since they don't need to work in order to eat and live comfortably. (and you can't make a MNT society work without a generous dole or radical suppression)
Mrs. Universe. Nanomedicine means that your girlfriend can look like Mrs Universe, and you like whoever you want, at least eventually. In the short term you don't need to be fat or bald, but probably can't do much about skeletal structure. (I hope you don't intend to impress her with expensive gifts)
Nanomanufacturing + the web eliminates most of the cost of distribution and R&D as well as the cost of manufacturing. Also, capital can substitute for many services, so a very nice life can be very cheap. My big concern is food. Nanotech can help farming, but as a fraction of all labor agriculture will probably go way up. OTOH, globalization could reduce the price of food greatly today. Imagine using telepresence to hire people in India to do your shopping, guiding them when necessary. Shipping would be a problem for some products, but not most.
A lot of the post MNT economy depends on the quality of the VR. If it's good enough, Alberta is fine.
One can always invent new status symbols that are expensive by the standards of the day, but if MNT enables everyone to have the standard of living of a Swede on the dole, much better health, a clean environment, and good VR, many many people will be satisfied. Add super-prozac (soma?) and the standard of living will depend 99.999% on culture and .001% on wealth.

Brett Bellmore

Energy, yup. It's never too cheap to meter, and they'll probably fold the heat pollution credits in as a tax...

C12. It's not the weight advantage. Pure C12 diamond lattice has a 50% higher thermal conductivity than mixed-isotope diamond, which has all sorts of implications for the speed of nanocomputer arrays, as well as the efficiency of nanomachinery.

Designer time. At least we agree there. LOL

Ms. Universe. (I don't date married women.) Look, I agree that ultimately, nanotech will give us all but complete control over our physical forms. Key word: "Ultimately". That's a tough problem, which won't be solved for quite some time after we've got the tools to tackle it.

"Ultimately" really is the word my point revolves around. This is a technology which has fabulous potential, but it will take time to reach that potential, and the guys who develop it first are going to want to live well immediately, not twenty years later. So, it doesn't matter if nanotech would allow beggars to live like kings, it's going to start out expensive because the developers will want to live like kings immediately.

Mike Deering

Brett, Michael, Mike, you are all correct IF nanotech doesn't trigger super-human artificial general intelligence (SAGI). IF it does, then all your expert designer engineering time becomes as cheap as atoms in the soil or bits on the web. Are we supposed to just assume that SAGI are not on the same time horizon as nanotech? Foresight recognizes the advances in AI and the real prospect of near term (<20 yrs) AGI or SAGI. Search for AI on their website, it's there. Why do I see no evidence of the effects of AI development in CRN's plans for the future? Is it the incomprehensibility wall erected by Vinge, popularized by Yudkowsky, and propagated by Anissimov? Balderdash! Intelligence is limited by the environment it is applied to. If we both know how to play tic-tac-toe it doesn't matter how much of a genius you are, you won't play it any better. We and any super intelligence we create are living in the same physical universe. There are no practical applications of physical law beyond QED. There is no technology beyond nanotechnology, never will be. Artificial programmable atoms suffer from the "big fingers" limitation, or in this case "big hands." Yes, SAGIs will be smarter, faster, and less error prone than us, but not incomprehensible. We need to start including the simultaneous development of SAGI and nanotech in our plans and projections of the future.

Brett Bellmore

On the contrary, it's perfectly rational to plan for a future with nanotechnology, but without superhuman artificial intelligence.

A SAGI would either be benevolent, neutral, or malign.

If it's benevolent, it can solve your problems better than you can.

If it's neutral, ditto, except that you've got to be careful how you pose the problems.

If it's malign, you're probably dead, and thus don't have any problems.

On the other hand, if you don't develop SAGI before you develop nanotech, you've got a whole set of problems which need to be solved by HUMANS, and given how slow we are, we'd best get right to it, just in case.

My personal expectation is that we will probably develop artificial intelligence shortly after nanotechnology, because it will give us the tools to adequately understand the human brain, and to build sufficiently powerful hardware to run a human or higher level intelligence on. Which means that we'd better have some of the solutions for coping with it already on hand.

The benevolent part I'm rather more dubious about, and as I've related, I think we'd be far wiser to concentrate on amplifying our own intelligence, rather than creating what amounts to slaves who are more powerful than ourselves. Or at least create these new intelligences by uploading, thereby assuring that their motivations were our own. (I'd volunteer, I've been looking forward to being uploaded for a good 30 years now.)

Mike Deering

SAGI is not a person. It is a tool, like a calculator for solving general purpose problems. The calculator in front of me solves arithmetic problems, at super-human proficiencies I might add. If I had a computer program that could solve any solvable math problem in any form, that would be cool, and very useful, and super-human in math, but it wouldn't be a person. Now if I had a computer program that could accept any data form, and use any general purpose or special purpose logic or reasoning algorithm and perform all of the computational functions involved in intelligence, and do it at very much greater than human levels of speed, accuracy, complexity, it would be a SAGI, but it wouldn't be a person. I know this is a difficult concept to grasp, especially after all the sci-fi movies and books we have seen that portray SAGI's as people.

So the question of whether it is friendly or not needs to be addressed to the tool user, not the tool.

michael v

Mike D: Either addressed to the user or the tool, Brett's assertion still stands. I don't see why he thinks it will take significantly longer to control our bodies than to upload our minds, since the latter obviates the former, but that is irrelevant. (BTW, I know about C12 conductivity, but I still call it greedy). Anyway Mike, the problem with your proposal is that using such a powerful tool properly is a VERY VERY DIFFICULT thing to do. Essentially, Yudkowsky's Creating Friendly AI is an attempt to describe the nearly unique method of using such a tool which will not result in catastrophic unintended consequences. Friendly AI may not be a person, but even if limited to REALLY advanced MNT it will appear to be a god. There is no point in predicting the world post SAGI because its nature will depend entirely on the actions taken by (because programmed into) the SAGI.

Chris Phoenix, CRN


First let me say: This is an awesome discussion!

I think Brett and Michael V are both right: scarcity will be a state of mind but will still exist in practice. I love Michael's summary: "...the standard of living will depend 99.999% on culture and .001% on wealth." Not much I can add to that!

I'll answer Mike D in some detail. First, AI isn't a big part of CRN's planning because we don't know how to think about it. I'm not ignoring it--I just don't know enough to make recommendations yet.

To date, I haven't seen any argument that made me feel comfortable about our ability to comprehend the decision-making processes of even moderately advanced AI. Even if we program it all ourselves. For example, we could program something that could do pattern recognition in 300-dimensional space with various kinds of dimensions. We could ask it questions about how to control certain variables--and we'd have no clue what other effects its recommendations would have.

It may be true that "If we both know how to play tic-tac-toe it doesn't matter how much of a genius you are, you won't play it any better." But the real world is not tic-tac-toe. Try this comparison: "If we both know how to speak it doesn't matter how much of a politician you are, you won't be any more influential." Clearly untrue. And now that I think of it, the tic-tac-toe assertion isn't true either. People make mistakes. Tic-tac-toe programs don't.

As to the assertion that there are no technologies beyond nanotech: we have already managed to store multiple bits in the electronic state of a single atom. I think this contradicts your assertion.

Chris

Karl Gallagher

"As to the assertion that there are no technologies beyond nanotech..."

Heh. Sudden vision of our grandchildren arguing about the implications of a gadget that shreds atoms into protons and neutrons and then reassembles them into isotopes not found in nature, while a Nobel-winner denounces it as hype and a bright grad student speculates about what you can make by arranging specific combinations of quarks.

Mike Deering

"AI isn't a big part of CRN's planning because we don't know how to think about it."

Thinking about AI and how it applies to your plans is a skill that takes practice to develop.

PRACTICE       PRACTICE       PRACTICE

How would AGI change your nanofac? The user interface would be more flexible than current computers. Instead of typing at it, you would talk to it, using unformatted natural language. If you had a wireless neural interface, which you probably will, you could think at it, either directly or through its wireless connection to the internet. It, being smarter than you, should be able to understand anything you are trying to say. The functional core would be hardened against tampering, but outside of the core would be the physical interface, which would be malleable on a microscopic level in the manner of Utility Fog. The physical interface would include sensors for visual, auditory, electromagnetic, temperature, pressure, and perhaps others. It would be able to solve hugely complex problems, such as on-the-fly design of common objects. It would learn. It would customize its behavior to your likes and dislikes. It would have an intelligent security system that was constantly informed of the latest security news via its wireless internet connection.

Beyond the nanofac, all the objects you interact with (your house, car, fridge, TV, computer, toothbrush, etcetera) would be intelligent, wirelessly interconnected, and sharing information. Of course most, if not all, of these objects will become obsolete when replaced by new functionalities. Your interaction with any of these objects would be like talking to the same unified personality, customized to your preference.

Your point about nanotech being superseded by subatomic technologies has a problem. All possible subatomic technologies will require such amounts of overhead to systematize that they will result in negative efficiencies compared to simpler nanotech solutions to the same functionalities. It's not worth the trouble.


Brett Bellmore

Hmm, an intelligent, malleable physical interface which will customize its behavior to my likes and dislikes? I suppose I could direct it to look like Barbara Eden... I can just imagine how Ashcroft's successor would react to THAT possibility!

Let's imagine an alternative embodiment: You're going to need some kind of integrated nanotech immune system for your body, to protect you against attacks, as well as to maintain your health. And the SAGI has to have access to your thought patterns anyway, to resolve ambiguities in language, and deliver what you genuinely want, from among the vast range of possibilities you can't think up on your own, and which spoken language would not permit to be related in a reasonable time.

So the SAGI gets installed into your body, interfaced with your brain, and instead of (just?) providing you with what you ask for, it acts as an extension of your intellect, a kind of super-frontal lobe. Not only is it then capable of solving problems for you, your motivations are directly its own, and instead of having a superhumanly intelligent slave, you yourself become superhumanly intelligent. And what it accomplishes is no longer the work of what's in effect a slave, but your own actions, the details worked out by a new extension of your brain.

With the SAGI a separate entity, you're limited to asking for the satisfaction of those desires you can conceive of. With it as part of you, your ability to conceive of things is itself expanded to be proportionate to the SAGI's capabilities.

That's the way I'd prefer to see it done. And note the advantage: Those who chose to become post-human through this form of augmentation have the behavioral constraints against nanotech abuse directly built into them, on an instinctive level. And thus not only are constrained themselves, but constrained to pass on that constraint to any others they might uplift to their own level.

Suddenly, you don't need massive surveillance and control to maintain a safe, civil society, because all the most powerful members of that society are inherently well behaved. You don't need to follow them when they go off into the reaches of space, for fear they'll come back as a conquering force. They're safe.

Janessa Ravenwood

Brett - sounds like you've read "Einstein's Bridge" by John Cramer. That's nearly exactly what he described - a wholly body-integrated nanofac with built-in constraints against abuse. (He's a particle physicist, not a computer scientist - I found several ways around his "constraints" in about 10 seconds, but still, the idea is there.)

I'm in the office today - working 'till 9pm every weeknight and all day Saturday SUCKS...which is why I haven't been here much. I'm just too brain-fried as the sole IT person at my office. Bleh...

Chris Phoenix, CRN


Janessa, I'm surprised you're not screaming in horror at that idea. A product, built by someone else, far too complex for you to analyze... give that product direct access to your brain? You don't think that some government somewhere won't have put in some back doors?

Chris

Brett Bellmore

Right. A government is going to create something THAT powerful, and be content with just releasing it with backdoors built in. Rather than, say, taking over the world.

The real concern in such a situation would be that the anti-abuse constraints designed into it would, probably openly, include front doors. Such as requiring absolute obedience to the law.

You'd have to have a lot of trust for whoever did the work on it, but the flip side is that if such a thing were available and being used, you'd either adopt it yourself, or live out your days in a nature preserve. The singularity, if it happens, isn't going to be kind to late adopters.

Janessa Ravenwood

That's just it, I WOULDN'T put it in UNTIL I had the necessary skills to analyze it. I would love to leave the computer industry and work in the nano industry. UNTIL I can acquire the necessary skills to understand such a thing, I would not install it.

But it's still a good idea if you can get one that's not booby-trapped and back-doored.

Brett: Absolute obedience to the law would instantly drive anyone schizophrenic - we have too many overlapping, outdated, and crazy laws on the books. Any nation that did that would probably wind up killing its population in short order - the nasty side-effects would tear the place apart.

Brett Bellmore

Oh, I agree, there are too many contradictory laws to allow for absolute obedience to the *law*. So they'd probably compromise at absolute obedience to *lawyers*. :O

Philip Glenn de Catalina

It's the Borg in Star Trek, without the hunger to assimilate everything (I'm not sure about this part). That's what will happen to us with molecular nanotechnology.

Also, for technology beyond nanotechnology, think about this: a nuclear fusion facility the size of a wristwatch or smaller, with air as the fuel. It will give you an unlimited amount of energy anytime, anywhere. And accessing and manipulating the "strong force" (strongest of all the forces) instead of just using the electromagnetic force is worth the trouble.

It's simple: molecular nanotechnology will give you abundance of material (by using a smaller amount of it to make things work), and the technology beyond it will give you abundance of energy.

You also need to think about the coming gift culture, the social effect of molecular nanotechnology. It's the direct result of abundance. The abundance comes from molecular substitution and copying designs.

The comments to this entry are closed.