
June 03, 2006

Comments


Phillip Huggan

Uploading for immortality won't work, and I strongly recommend distance from the meme. Agnostic about cryonics. I don't think vitrification is powerful enough to work, but I'm all in favour of any endeavour to channel dollars and minds towards researching new suspension techniques.

AI is a societal risk/reward tradeoff. If the odds of a particular AI being "friendly" are assured to be greater than the odds of societal survival without it, the AI should be turned on. Otherwise, an MNT-built sensor grid is a solution to a WMD-capable AI.

I'd like to see more focus upon SPM hybrids and any potential piezoceramic actuator alternatives. I know the field of diamond surface chemistry needs to advance too, but Veeco only puts $60 million annually into R&D. Many SPMs are homemade, so some of the best blueprints might not even be public. Freitas is doing good work identifying what some probe tip requirements will be, but the rest of the SPM system needs to be optimized too.

I'd also like to see more analysis of existing organizations; how MNT could be grafted onto some present actors. If I got MNT tomorrow, for all my brainstorming the best solution I can presently come up with is to e-mail the blueprints to the UN Security Council militaries, and to publicly distribute expired-patent products under a microfinance banner. There needs to be a long-term mechanism to halt military weaponry R&D. Right now the cost of ballistics R&D is keeping things sane, but costs will drop post-MNT.

A meta-problem with AI and MNT is that actors are using the threat of other AI/MNT programmes as their rationale for development. There has to be a more objective metric. MNT administrative structures should be easier for the layman to assess as "safe" than AI engineering "obedience" is.

marko

First let me say that this blog is the most informative, credible, and (mostly) balanced forum for discussion on advanced nano that I have found, and I applaud that.

Personally, I think CRN should be focusing more on the first hurdle - demonstrating that the development of MM is in fact possible in the short to medium term - under 20 years.

I'm sure that a large proportion of interested people accept that some form of MM will be with us in 50 years or so, and this audience should be regularly targeted. This means more regular posts on possible paths / milestones / required investment / timelines - stuff that is buried in CRN's archives.

When wonders-of-the-post-MM-world posts dominate, you are preaching to the converted, while the 50-years-out people see these posts like 1950s sci-fi predictions of today and tune out. When you throw in too much wide-eyed AI / transhumanist stuff, it just dilutes the message further.

Rik

In principle it's simple: anything related to MM, regardless of how remote. If it's possible to link transhumanism to MM, why not write about it? I'd like to know if CRN has updated its timescale: you expected MM probably between 2015 and 2020. Has your opinion changed?

Jamais Cascio

The issue here isn't what's related, but what makes for compelling narrative. It's sad but true that too much discussion of the weirder, "fringe" results emerging from MM will drive off some otherwise interested allies. What's more, in most cases the weirder elements aren't even that necessary to the CRN argument: there's little need to discuss AI as a risk, for example, when pointing out the risks of ubiquitous (non-AI) nanocomputers monitoring and pattern-analyzing our behavior is both more tangible and less dependent upon fuzzy concepts.

(BTW, the futurist matrix has gone through subsequent iterations in later posts on Open the Future.)

John B

I'd personally love to see CRN handle 3 basic goals:

1) A motivator/cross-fertilizer for multiple research efforts at the nanoscale, preferably leading to a working model or a stable disproof of nanoscale manufacturing via various methods - diamondoid, protein-based, whatever else comes up.

2) A thinktank asking 'what if', along the lines of the CRN "30 questions" effort. What happens *IF* this kind of technology happens? What gets broken? What gets boosted? What is a social benefit? What's a social risk? What are the abuses/uses of the postulated technology? (Any follow-ons to this effort planned/scheduled, especially the hinted-at link to researchers?)

3) Monitoring social, economic, and legal events in the real world, today, that /might/ end up affecting molecular manufacturing - RIAA, DMCA, international trade pacts, efforts to ban or encourage research on various approaches which might lead to molecular manufacture, etc.

The biggest thing is ongoing, clear and honest reporting, regardless of your final choice of focus. Something you've (IMO) mostly accomplished thus far - something I'd like to applaud you gents for.

Sincerely,
John B

M C

The relative effectiveness of offense as opposed to defense is a very serious issue as MNT develops. It seems that with other things constant, MNT tips the balance strongly towards offense.

Uploading may be the only way to tip the scales back in favor of defense. Humans are very fragile and depend on fragile supply chains.

An uploaded human may not need much beyond sunlight, and may be backed up in a distributed fashion so as to make effective attacks much more difficult.

Alternatively, an augmented human may go some of the way in the desired direction, but some more study is required.

Chris Phoenix, CRN

Great feedback here! I'm not responding yet because I want to see what else comes in. But I want to thank you all--your contributions will be useful and important to us.

Chris

michael vassar

One problem with Jamais's matrix is that a substantial number of serious futurists, such as myself and Bill Joy, start out as "realists" and optimists, and as we learn more we discover that the situation is actually dire and that the institutional solutions we had assumed were in place are totally broken, with no realistic prospect of being brought up to the task in time. As a result we are forced into the "Idealist" and "Pessimist" camps.

Chris Phoenix, CRN

Rik, our best guess is still 2015-2020.

Phillip, I agree that better SPM tech looks like a major enabling/gating technology. I'm far from sure that CRN should be working on advancing MM for its own sake, though. Aside from a few determined skeptics/debaters, I haven't heard anyone suggest that SPM tech is a fundamental showstopper.

Marko, I think and hope you're right about the 50-year acceptance; that's a fairly recent victory. Should we be focusing on establishing a numeric timeframe, or just on saying "could be really soon" and focusing on easily verifiable but poorly understood implications like surveillance and weapons? We'll be considering this question.

John B, sounds like your point 2 is talking about scenario planning. We're currently talking in the CRN Task Force about how to make that happen. Your point 3 will probably partially be researched in that project. It seems really hard and I keep hoping someone will come along who wants to partner with us.

M C, I agree with your first two points: offense vs defense is important, and it looks like MM tips the balance toward offense (at least for fragile humans). I think uploading is too radical a suggestion to be considered a successful outcome.

Chris

Chris Phoenix, CRN

Rik, Jamais, I think we do have to be careful not to unnecessarily turn people off with fantastic or yuk-factor scenarios. We also have to be careful not to shade the truth or hide any important information. For example, will MM make cryonics work? Seems plausible (assuming there's no fundamental reason it can't work). Will that be important? Well, if talking about cryonics would get people to sign up, it could save a lot of lives. But earlier experience shows that's not the case. On balance, it seems better not to talk about cryonics.

Will future AI technologies create a massive existential risk? I don't know; I continue to study and think about it; till then I stick with scenarios based on known AI/computer technologies, which are scary enough.

I try to design my messages for accuracy first, then communication--while knowing that a bad enough miscommunication makes accuracy impossible--but in practice I haven't distorted my messages to make them palatable.

Chris

michael vassar

I'd like to suggest a dialogue with John B. His points are unusually thoughtful, both when I agree and when I don't.

I strongly disagree with Jamais Cascio about both the need for a compelling narrative and the lack of need to discuss AI. The only possible need for a compelling narrative is to influence the behavior of those who are not rational enough to be engaged by a sound argument, but such people cannot be motivated to act rationally anyway so there is no reason to try to engage them. If you succeed in getting through to (somewhat) stupid people by using (somewhat) misleading information you can be doubly sure that they won't respond productively.
Since AI is likely to be the most important thing ever, and since MNT is likely to be the most powerful lever ever for determining what shape it will take, MNT discussion that ignores AI surrenders 90% of its potential relevance.

I'm not convinced that it's desirable that more people believe that MNT is 10-15 years away than already do regardless of the claim's truth value. Broadcast a message quietly and only those who are listening will hear. That may be desirable. If everyone believed that MNT was around the corner that might have adverse consequences today, such as the much discussed risk of arms races.

Phillip Huggan

Regarding #3, I assume CRN is looking to work with existing organizations in monitoring real-world social/economic/political MNT-worthy developments, but it is a living document I intend to write anyway over the next year, and I would welcome a collaboration (caveat: I'm only interested in the diamond pathway).

There are basically two metrics to multiply together when ranking MM administration by existing entities: the technical odds of a project's engineering success, and the expected quality-of-living gains garnered under its post-MNT administration. The former ranking is easier and far less contentious.

Chris Phoenix, CRN

Michael,

1) I'd be happy to dialogue with John B.

2) A useful response by an irrational, semi-informed person would be funding studies by more careful thinkers.

3) I'm not convinced that MM will be such a big driver for AI--it may not need any help; I think "general AI" theory is still pretty contentious--last I heard, the need for heuristics will make any realistic implementation less than fully general; I'm not yet ready to agree that the AI tail should wag the MM-studies dog.

4) If some people believed that an arms race was necessary, and others believed there was no reason for it and so no one would start it, there would be less concerted effort to find a way around it. I generally assume that more complete and accurate knowledge is better. We don't know which government or corporate entity might find an arms race solution--if they study it at all, that is.

Chris

Chris Phoenix, CRN

Philip, I'm very open to collaboration.

1) Is Wise-Nano a good place to host the living document?

2) By "MNT-worthy developments" do you mean developments that could lead to MM being developed, or more broadly any development that's relevant to MM outcomes?

3) In your last paragraph, are you assuming that the entity that develops MM will also get to administer it? I'm skeptical of that; it could get stolen, absorbed, or copied.

Chris

Phillip Huggan

Wise-Nano is fine. "MNT-worthy" was a previous poster's terminology. I was suggesting ranking the MNT engineering capability of various entities based upon existing and planned nanoscience/nanoindustry infrastructures.

The long-term goal I had in mind was to give MNT scale-ups in progress an idea of what characteristics their proposed administrative structures should possess if they intend their unveiling of MNT to be beneficial, using clues from presently responsible administrative entities; the overt suggestion that one give one's MNT to one of the highest-ranking entities does present itself.

For #3, I don't intend the ranking to function as a predictor so much as a stimulus to the development of a realistic post-scarcity economic model. It is meant to start from a crappy emergency 2006 administrative blueprint and work toward the future, rather than backward-chain from a distant theoretical world society.
I'm only interested in compiling, or helping to compile, a diamondoid ranking, but I'll gladly forward any data I find along the way to anyone who wants to attempt to rank administrative structures of any bio/biomimetic/polymer pathway using a similar methodology.

Okay, so multiply a technical proficiency ranking and an administrative efficiency ranking (a toy calculation follows this list). I'm thinking it might be easiest to break the 2nd ranking apart into three MNT goods product classes:
1) Military arms and policing. I'm staying far away from this one for a while.
2) Call it a Guaranteed Income or my expired-patents library. Whatever. Generic diamondoid stuff you can't be sued for making. The stuff that saves lives right away, and the most powerful political lever for permitting administration to (partially?) remain in non-military hands. I'm not sure of the best ranking metrics here. Education/training expenditures, national quality-of-living rankings, corporate membership in ethical mutual funds, microfinance penetration rate... the idea is to estimate who will most rapidly create an 8-billion-strong middle class.
3) MNT luxury goods. Basically all other MNT products. A simple World Bank ranking of banking-industry maturity should suffice here. I expect more nations to pass than fail, and existing capital structures should take care of distribution.
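[Ed.: a minimal sketch of that two-factor scoring, in Python. It is purely illustrative: the entity names, probabilities, per-class scores, and the equal weighting of the three product classes are all invented placeholders, not real assessments.]

```python
# Toy version of the proposed ranking: score = P(engineering success)
# x mean administrative quality across the three product classes.
# All entity names and numbers are invented placeholders.

technical_odds = {"Entity A": 0.30, "Entity B": 0.15, "Entity C": 0.05}

# Hypothetical administrative-quality scores per product class, 0-1.
admin_quality = {
    "Entity A": {"arms_policing": 0.2, "guaranteed_income": 0.7, "luxury": 0.8},
    "Entity B": {"arms_policing": 0.5, "guaranteed_income": 0.5, "luxury": 0.6},
    "Entity C": {"arms_policing": 0.4, "guaranteed_income": 0.9, "luxury": 0.4},
}

def rank(odds, quality):
    """Multiply technical odds by mean class quality, highest first."""
    scores = {
        name: odds[name] * sum(classes.values()) / len(classes)
        for name, classes in quality.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(technical_odds, admin_quality):
    print(f"{name}: {score:.3f}")
```

As the comment notes, the technical factor is the easier half; everything interesting (and contentious) hides in how the quality scores are chosen and weighted.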

michael vassar

A useful response by an irrational, semi-informed person would be funding studies by more careful thinkers IF irrational and semi-informed people generally funded studies by more careful thinkers. In empirical fact, it seems to me that they more typically fund the likes of the "presidential council on bioethics" or at best the "NanoBusiness Alliance". He who would pay the piper must know the tune. Ethical systems don't mix effectively. Commercial money VERY rarely buys honest and thoughtful analysis about "big" issues and shows little if any preference for such analysis over bad analysis when time horizons exceed a decade or scope exceeds a few tens of billions of dollars.

John B

M C - I agree that, all else being equal, MNT-style technologies seem to tip the 'balance of power' to the offensive side. However, you seem to be focusing on the longer term, which IMO is significantly more stable than the nearer-term difficulties.

Your pardon if I don't get into too many details - no need to give out too many nasty ideas, IMO - but MNT potentially gives attackers multiple new techniques to gather information, affect infrastructure, and/or affect people. Each new technique will need to be countered, basically independently. This alone is a radical change in the balance of offensive & defensive capability.

I agree that your long-term scenarios are possible problems, but IMO you're looking too far down the timeline. Look to the little advances, the small successes, because IMO they're already starting to show up.

Sincerely,
John B

John B

Chris -

I find it interesting and somewhat sad that a proponent of a novel technology, one which relies solely on future R&D efforts, isn't working on encouraging that R&D - or at least that's my interpretation of your response.

For my second point, yes, scenario planning is perhaps the end-state of it. However, instead of anything so grandiose as a full-out scenario plan, I'd suggest clearly reasoned-out positions on what something like MNT (whatever it ends up being called) will likely do, as baby steps toward such a goal.

That is - "Farming will increase or decrease in fiscal reward from MNT because...", "These crops are more/less likely to bring financial reward presupposing MNT because...", "The best use for land in an MNT-enabled society is dependent on these variables...". This MAY be what you intend when you say 'scenario planning' - my understanding of the term is more along the lines of "given the advent of this specific form of MNT at this place & time, the world over time would look like..." - which is a MUCH more complex issue. (Not that the questions above are not complex - they are! - it's just that IMO they might be more soluble than a single-step, overarching 'scenario planning'.)

Additionally, my second point referred to the follow-on studies you'd referenced during the 30-questions. Specifically, that you were looking for academic partners from high school on up to dig into the issues in greater depth - has there been any progress along these lines?

Number 3, like all the others, *is* hard. I suspect that you'll need someone doing some heavy feed-filtering with a nanotech bias from lots of sources, which is time consuming. Some of this might be on a volunteer basis - see the 'transhumantech' yahoo group for a great, if diffuse, model of such an effort - but I suspect you'll need someone 'in charge' of the effort and riding herd on it.

-John B

John B

Michael -

Thank you for the compliment. However, being the simple type, I much prefer "talking" to "dialog". *wry grin*

What were you thinking of talking /about/?

-John B

Loki

John B,

When you say that uploading is long-term, what is your estimate for the timeframe?

Based on Moore's law and trends in neuroscience, I would estimate that uploads will be available for $1M in 10-15 years.

I agree that uploading for the general population will be additionally delayed - the cost has to come down by a factor of 100-1000. Also, a large percentage may never accept uploading at all.
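[Ed.: for concreteness, the delay implied by that cost reduction works out as follows, under the assumption (a guess, not from the original comment) that hardware cost for a fixed capability halves every two years:]

```python
import math

# Assumption: hardware cost for a fixed capability halves every 2 years.
doubling_time_years = 2.0

for factor in (100, 1000):
    halvings = math.log2(factor)            # halvings needed for this factor
    years = halvings * doubling_time_years  # time for cost to fall that far
    print(f"{factor}x cheaper: ~{halvings:.1f} halvings, ~{years:.0f} years")
```

On those assumptions, the factor-of-100-to-1000 drop adds roughly 13 to 20 years after the $1M price point.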

On the other hand, I really don't see how an un-augmented human can really be protected in the future from remote attacks. And if they can't be protected, the situation will be destabilized.

John B

My personal take is that uploading is probably more'n 30 years into the future, possibly hundreds of years. I base this primarily on my gut reaction, which rests loosely on the historical track record of scientific discoveries before a practical technology could be developed, and on the fact that we still don't understand how the brain/mind/consciousness/(pick your term) works.

Moore's Law's all well and good (for all that it's more a theory with a long-term prediction than a law). However, if you don't know how to use all that computer power, you'll never be able to upload, or create AI for that matter, other than perhaps accidentally.

The question of acceptance of uploading in the general populace is IMO a poorly defined one at this time. There are far too many unanswered questions about what uploading will really be, as well as what mass perceptions of it will be.

There are large numbers of scenarios in which nanotech - molecular manufacturing - does little to nothing that destabilizes what we consider society today. One example: a model whose products require a 'clean, hard vacuum' environment to work. ('Course, that kills exponential growth, too... but it's still IMO molecular manufacturing!)

Unfortunately for the optimistic standpoint of the previous paragraph, IMO there are far more potential technologies which rip up modern society pretty thoroughly. *wry grin* However, even within these scenarios, there are some which allow for protecting unaugmented ('baseline') humanity. Think, for instance, of a nanotechnology (perhaps block-based, perhaps not) which has a characteristic bond length or lengths required in it. Add in the fact that microwaves can be tuned fairly easily to affect certain bond lengths, and you've now got a highly efficient nano-filtration unit - just keep cooking the air on your AC input, using the 'right' frequency or frequencies...

*shrug* IMO, there are no good answers yet - primarily because we're wrestling with a crowd of 'what-ifs', which prevents us from asking good, tightly defined questions. *wry grin* Unfortunately, by the time we're able to properly frame the questions, it may be too late to address potential problems. Because they'll BE here. (And yes, this goes for AI as well.)

-JB

Phillip Huggan

Michael V, from what I can garner from the sl4 and AGIRI discussions, those in the AI community don't understand some very basic moral principles well enough to build an AI that is safe for humans.

I don't know about the technical details, but the AI goal-system ideas put forward are very inferior to the MNT administrative solutions forwarded here, among other places.

Phillip Huggan

Now that I reread the above, it sounds like a blanket statement against AGI research, which is not what I intended.
I have noticed that a great deal of AI discussion regarding what engineering projects an AGI should be programmed to implement to be beneficial is really just old ethical dilemmas redressed. And they are handled very poorly, often with a libertarian bent. It is painfully obvious from my perspective that the authors of most mailing-list posts about AGI morality have not read and thought about the relevant mainstream philosophical writings on the subject. Even that much is not required: a cursory reading of Human Developmental Theory would often solve a given problem about AGI infringement upon human rights.

If we can manage the risks involved in transitioning to a society of diamondoid products, I don't see too many applications where an AGI would subsequently be required. It is a card that should only be played when human existential risks are greater than the risk that the AGI won't be "Friendly". Scenarios like a runaway climate change of a few degrees per decade, an entrenched MNT tyranny, or some other cataclysm are AGI-worthy.

I think SIAI is presently working on the math for an AGI blueprint that is 100% provably friendly, and I'd be happy to see such an architecture turned on if the goal-system programmers took the time to learn morality in depth, and especially learned why the Grandfathering Principle (the strongest programming defense against destroying one's creator) is moral too and not merely ethical. But I suspect there will always be a double-digit % risk that any given AGI might kill us all or worse, and as such it may never be rational to turn the AGI on - until perhaps the universe begins to wind down and energy needs to be rationed.


Loki

Uploading seems to be much easier than AGI.

It would require fast enough hardware and fine enough brain scanning. You don't need to understand the brain at any organizational level above the neurons.

If you can duplicate the functionality of the individual neurons and their interconnections, the higher-level phenomena will emerge. You don't have to understand the mind, or understand consciousness.
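[Ed.: a toy illustration of that claim, not a serious neuron model - the leaky integrate-and-fire dynamics, the random stand-in "connectome", and every parameter below are invented. The point is only that the simulation loop uses neuron-local rules; nothing in it refers to minds or consciousness.]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                   # toy network size
weights = rng.normal(0.0, 0.05, (N, N))    # stand-in for a scanned connectome
v = np.zeros(N)                            # membrane potentials
threshold, decay, drive = 1.0, 0.95, 0.08  # invented parameters

# Each step applies only neuron-local rules: leak, integrate spikes, fire.
for _ in range(100):
    spikes = v >= threshold                # which neurons fire this step
    v[spikes] = 0.0                        # reset fired neurons
    v = decay * v + weights @ spikes.astype(float) + drive

print(f"fraction spiking on last step: {spikes.mean():.3f}")
```

Whatever higher-level behavior the network shows arises from the wiring alone, which is the substance of the argument above; whether real neurons reduce to anything this simple is exactly what is in dispute.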

I think it is pretty clear that the hardware will be available in 15 years.

Kurzweil argues pretty persuasively that neuron-level scanning will be available in the same time frame. Nanosensors will certainly be the enabler.

The comments to this entry are closed.