
August 07, 2006

Comments


Michael Anissimov

David Berube is totally clueless. I'm disappointed that Patrick isn't being more serious. Privacy concerns are nothing next to mass-produced missiles. You are on the ball about nanotech in general, but seem to neglect the risk of recursively self-improving AI that values anything aside from us. An AI programmed to maximize profit in the stock market will eventually take apart the planet and reform it into a gigantic bank of stock success indicators. It's far easier than predicting the actions of companies made of real humans.

George Elvin

I wouldn't discount the privacy concerns as sensors and other surveillance devices go nano. For one thing, they're much more subtle than mass-produced missiles, and for another, they're already here.

KAZ

It's pretty silly to worry about a stock market AI deciding to destroy the planet in order to maximize stock market indicators. This assumes that the AI has absolutely no other way of examining and measuring things, which would actually make it very poor at predicting stock market success in the first place.

The answer, regarding nanotech, is /of course/ there are risks. Indeed, there are tremendous dangers...as there are with every single new technology. But, as is always the case when free people have access to a technology, the benefits outweigh the risks, and will tend to give us the power to protect ourselves from the dangers it creates.

The only exception to that rule is when a coercive force, like government, is the central producer of a new technology, as with nukes or aerospace.

Phillip Huggan

Kaz, pure libertopianism (the western ideological counterpart to radical Islam as far as human life is valued) ranks very low on the economic efficiency scale. It is above Soviet communism, but not by much. Even if you can ensure your initial capital distribution is perfectly efficient (usually property owners arbitrarily start off with all the toys), capital itself helps attract more capital. Eventually money will pool to the wealthy instead of to good engineers and administrators. There are many other flaws in Libertopia-land; I'll list them one by one if you really care to be deprogrammed.

M. Anissimov meant to say an AGI (one that can recursively improve its own learning/intelligence) stock trader is risky, not the already existing AI traders (which are crappy portfolio managers circa 2006).

Michael, if AGI is so risky, that is an argument not to build it. You seem to be focused upon a strategy of accelerating its development (under the assumption AGI is inevitable). I see no problem with designing and researching AGI, but to actually flip the switch to turn it on would create a greater existential risk than it would extinguish, unless there is an impending existential threat already present (there is not circa 2006). The existential threat of millions of diamondoid missiles can be assuaged by using MNT to construct diamondoid cities beneath existing cities (it is a project an organization such as the Lifeboat Foundation could design). There are some MNTed product existential risks I don't presently have solutions for, but that doesn't mean solutions won't appear.

I think privacy concerns are important. There is a great deal of danger in enacting an Orwellian world (it greatly increases the odds of intractable tyranny). If this source of potential control is made unavailable to anyone, it makes the world safer.

Phillip Huggan

A note to my above post: I estimate the present annual odds of an event occurring that would irreversibly trigger human extinction at less than 0.2% (primarily the aftereffects of a massive nuclear exchange). Purely in terms of existential risks, if you have an AGI ready to be turned on now, you would have to be more than 99.8% certain it won't go SkyNet. I don't know if that is possible in principle, even assuming millions of AGI programmers at the disposal of an AGI project. In the future, annual human extinction odds will be higher than 0.2%, but without an objective evaluation of the safety of a proposed AGI design, there is no point in ever turning it on unless a very visible human extinction threat is present (diamondoid missile rains are three decades away at the earliest, IMO).

I've tried to suggest structural safeguards such as an AGI design that only spits out paper engineering blueprints (Oracle/PAI). I was greeted with the rhetoric that "humans are inherently fallible". Well, if the AGI community cannot see that there are degrees of fallibility, and if (Libertarian) humans are designing the goal system of an AGI...

I doubt an AGI can be constructed with any greater assurance of obedient Friendliness than around 80%. There conceivably are existential risk events whose odds could rise above 20%. But no one in the AI/AGI community has ever suggested only turning on their AGI in such an event, or mentioned a plan demonstrating transparency to such a protocol. Much has been written of the prospect of greatly increased human longevity in singularity philosophy, and obviously an AGI could be programmed to effect this end. I'm worried some in the AGI community are assessing the danger of an AGI in terms of a tradeoff regarding their own personal longevity gains, rather than considering human extinction threats.
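
To make the arithmetic in this comment concrete, here is a minimal sketch of the break-even reasoning (Python, with illustrative numbers only; the function names and the optimistic assumption that a Friendly AGI averts the background threat entirely are mine, not anything proposed above): switching the AGI on only lowers net existential risk when its failure odds are smaller than the background extinction risk it would avert.

    # Illustrative sketch only: compares annual extinction risk with the
    # AGI left off versus switched on, under the simple framing above.

    def risk_without_agi(background_risk):
        """Annual extinction risk if the AGI is left off."""
        return background_risk

    def risk_with_agi(background_risk, agi_failure_odds, averts_threat=True):
        """Annual extinction risk if the AGI is switched on.

        Optimistic assumption: a Friendly AGI averts the background threat
        entirely, so only the AGI's own failure odds remain.
        """
        residual = 0.0 if averts_threat else background_risk
        return agi_failure_odds + (1 - agi_failure_odds) * residual

    background = 0.002  # ~0.2% annual background extinction risk, circa 2006
    for assurance in (0.80, 0.998, 0.999):
        failure = 1 - assurance
        on = risk_with_agi(background, failure)
        off = risk_without_agi(background)
        verdict = "turn on" if on < off else "leave off"
        print(f"assurance={assurance:.3f}: on={on:.4f} vs off={off:.4f} -> {verdict}")

The same comparison reproduces the 80%/20% figure above: with only about 80% assurance of Friendliness, switching the AGI on only wins once the standing extinction threat itself exceeds roughly 20% per year.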

Chris Phoenix, CRN

Michael: If an AI can't distinguish between a planet-sized bank of stock market success indicators and actual success, why will it be able to distinguish between a single fake indicator and actual success? So it would build one fake indicator and look at it forever--essentially, it would wirehead itself.

I've never been convinced that a GAI could simultaneously have the subtlety and depth necessary to efficiently destroy the planet, and the stupidity to do it by accident.

Phillip, Kaz: taking extreme ideological positions will not advance the discussion.

The free market is a great way to allocate scarce resources. It does not care about people, but it tends to make people better off anyway. The free market is not the same as capitalism, as demonstrated by the fact that rent-seeking is pro-capitalist but anti-free-market. It can be debated whether "libertopianism" necessarily leads to rent-seeking. It can be debated to what extent security should be provided at the expense of the pure free market. (Even a lot of libertarians agree that government has a legitimate function in regulating force and fraud.)

Let's try to have a discussion at a more subtle level...

Chris

The comments to this entry are closed.