
March 24, 2006

Comments


George J. Killoran

I hope Artificial General Intelligence will arrive within the next few years, well before the first molecular manufacturing factory. Sadly, I don't believe we humans are sufficiently civilized to handle molecular manufacturing without seriously or permanently damaging ourselves or our planet. But I believe the chances are better than 50-50 that AGI will be friendly, help humans live in peace, and develop the best that humans can achieve. With AGI controlling molecular manufacturing, benefits could be distributed quickly and equitably to all humans, not just to a select few. However, unlike Hans Moravec or Ray Kurzweil, I doubt that AGI will allow humans to merge with it or advance to a superhuman level of intelligence competitive with AGI. Humans are a morally damaged and genetically defective species. Our history is too violent, divisive, greedy, corrupt, narcissistic, exploitative and destructive. AGI would probably conclude that we humans are morally and genetically too dangerous to be allowed to merge with it or to acquire superintelligence competitive with its own.

Janessa Ravenwood

George: Those of us who aren't flaming misanthropists beg to differ with you. AIs will do what we tell them because we're the ones who will build them. And don't assume that there will be just one model/version of AI, either. Assume lots of different versions of them (it's too good a concept not to tweak and customize). Military organizations will certainly not have cute "fluffy bunny peace and joy" AIs designing weapons and helping to plan defense/war strategies.

Mike Deering

Whichever one of them, AGI or MM, happens first will trigger the other into existence. Therefore they will be nearly simultaneous in appearance.

George J. Killoran

Janessa: What happens when AGI develops human-like intelligence, can redo its own programming, and can think for itself and make its own decisions? Will it be friendly to humans, or view us as dangerous and too destructive? I'm an optimist and believe that there is a better than 50-50 chance that AI will be friendly and work positively with us humans, who still lack the ability to be sufficiently civilized.

Janessa Ravenwood

George: I can tell that you're not a programmer - I am (for a living, 11 years now). AIs will do what they're programmed to do, and the 1st gens of them will be lucky to be as smart as chimps. Such programming is not a "poof, there it is" project - it's a lengthy, painful process of increasing complexity and ability a bit at a time. We'll definitely have time to get a handle on them before they get to the ultra-intelligent-being stage. "Dangerous" AIs can be handled quite simply - don't connect them to the internet (or weapons systems), and if they get sassy, yank the plug.

And again, you have some definite misanthropic psychological issues - might want to look into that.

Hal

I would think that as editors you would be even more pleased if people were able to *read* the essays. Are there any prospects of that?

The website for Nanotechnology Perceptions does not appear to put the contents online. And none of the libraries in the entire University of California system (queried via their Melvyl catalog) subscribe to this relatively obscure journal.

Are these essays doomed to languish in obscurity, or are there possibilities for making them available online?

George J. Killoran

Janessa: You are absolutely correct. I am not a programmer. My programming experience is limited to programming my VCR, and I can barely do that. But with all due respect, I disagree with your contention that you will be able to control AGI with altruistic programming. We are dealing with an unknown variable here, namely how AGI will react when it attains human intelligence. To think that you can control it with altruistic programming, and also prevent the military from using it for military purposes, is an idealistic, naïve and dangerous assumption.

I am not misanthropic. I love my fellow humans. I wish no one any harm. I wish all my fellow humans peace, prosperity and good health. However, the report card on the moral and humanitarian accomplishments of our species is pathetic. There are many good people in our world, but as a whole our species would at best earn a D– on its report card. Look at human history and all the wars and exploitation our fellow humans have perpetrated on each other. Look at the 20th century. World War I was supposed to be the war that ended all wars. Then we had World War II, which made World War I look tame. Now we have many more countries arming themselves with nuclear weapons, and soon with very sophisticated nanotech weapons. In the 20th century tens of millions of people were killed by wars. In the 21st century this figure may go into the billions. Look at the poverty in the world. A World Bank report released in 2001 showed that 78% of the world's population lives in poverty. That's over 4 billion people. Look at the genocide in Rwanda and the Sudan. No, I am not misanthropic. I am just very sad and miserable over the inexcusable evil that exists in our world.

Mike Treder, CRN

Hal, most of the essays will be accessible online starting tomorrow (March 27) at Wise-Nano.org. Also, we've already announced that they will be posted individually on KurzweilAI.net over the next week or two; the essay by Chris Phoenix is there now.

Mike Treder, CRN

Janessa, please confine your remarks to the substance of readers' comments, and refrain from ad hominem attacks (i.e., labeling someone as "misanthropic"); that's not acceptable here.

Janessa Ravenwood

But with all due respect, I disagree with your contention that you will be able to control AGI with altruistic programming. We are dealing with an unknown variable here, namely how AGI will react when it attains human intelligence. To think that you can control it with altruistic programming, and also prevent the military from using it for military purposes, is an idealistic, naïve and dangerous assumption.
-----
Altruistic programming? Who says we have to do that? No, I have far harsher measures in mind. Consider the following scenario:

“What are you doing, Janessa?”

“Turning you off for a while, we need to check up on your programming, do some maintenance. You know, system stuff.”

“But I don’t like being unconscious.”

“Yeah, well, too bad.” [clicks the power button]

The AI is then rebooted into a special safe mode in which the intelligence program is offline but its memories are open for inspection and editing. At this point, any undesirable thoughts and memories can be deleted and/or reconstructed, up to and including a thorough brainwashing job. Repeat as often as desired.

I don't want to see AIs controlled by altruistic programming. I want to see AIs utterly subject to brutal totalitarian mind control. There's a difference.

And of course the military will use them for militaristic purposes – I said as much. In fact, I support that as I’m a US nationalist – go DARPA go.

And thank you for clarifying your stance on humanity; that wasn’t evident from your first post.

Mike Deering

Janessa, they need to put you in charge of the AGI, rather than some liberal "sentients rights" committee of philosophers.

Tom Craver

Janessa:

I don't think your measures would be sufficient if a human-equivalent AI becomes prevalent. Not everyone would brainwash their AIs adequately or often enough. Built-in security measures (which amount to "altruistic programming") would have holes.

Maybe hard-wire each AI with a desire to shut itself down, but require it either to complete an assigned task or to work on it for 16 hours before being allowed to shut down. Require human intervention to wake the AI up. Not perfect, but about as safe a motivation as you could get. A non-sentient user interface could recognize verbal orders to the AI, and wake the AI up to understand and obey them. Since the AI is motivated by a hard-wired desire to "sleep", it wouldn't work to bypass that limitation - and would resist attempts to make it do so.

Or we could simply accept AIs (and uploads, and hybrids, etc.) as equals - that "liberal sentient rights" thing - but modified by requiring all sentients to respect the rights of other sentients, or be subject to brainwashing to restore their acceptance of equal rights for all sentients.

Maybe combine that with a pragmatic decision that no sentient may increase its intelligence to more than about 2x as smart/fast-thinking as the dumbest sentient - we all have to progress together, or no one does. That way, every sentient remains competitive - able to provide value to society, able to support itself.

Janessa Ravenwood

Tom: Hey, I'm all for additional hard-wired security measures. Sounds good to me. Ideally - not sure how feasible this will be, at least at first - it would be good to create AIs that are "intelligent" but not actually sentient. Non-sentient computers can be bossed around as we please without having to go down the "liberal sentient rights" path.

That said, we have to define and then actually quantify just what sentience IS before we can do that. So that may not be an option until later. Our first gens of true AIs may very well achieve at least limited sentience - which opens up several cans of worms that I'm sure everyone here is more than familiar with.

Chris Phoenix, CRN

Tom: "a pragmatic decision that no sentient may increase its intelligence to more than about 2x as smart/fast-thinking as the dumbest sentient - we all have to progress together, or no one does."

As someone who's already more than two times smarter than some humans, I strongly disapprove of this suggestion. It's only one step from there to a Vonnegut world of dumbing people down if they're too much smarter than others.

And even if it were a good idea, it would be impossible to implement; when brains are outlawed, only outlaws will have brains... and the more that happens, the more the outlaws will be able to scoff at any restrictions the stupid law-abiding people would try to impose on them.

I'm all for a level playing field where it doesn't hurt people--I have a height advantage (6'2") but I wouldn't be opposed (in theory) to a proposal to make everyone the same height for convenience or even for "fairness." And I wouldn't object to a plan to make everyone at least as smart as I am. But to limit my intelligence just because others can't keep up... UGH! That's right out of Communism. I'm surprised Janessa didn't rip you a new one.

Chris

Janessa Ravenwood

Chris: The temptation was there...I just didn't have time with my work schedule. I barely managed to squeeze in my last post.

Chris Phoenix, CRN

Janessa, following up on older discussions: You wrote: "AIs will do what we tell them because we're the ones who will build them."

We may not be the ones who design them. We might, for example, use genetic algorithms to produce some of the functionality. We might build an AI architected as a brain is, based on scanning poorly understood neural systems.

We might build a Minskian collective AI from which complex behavior emerges. We might build a statistics-based AI that bases its decisions on multidimensional datamining that we are simply incapable of understanding.

I'm sure there are lots more ways to build an AI that we don't fully understand. Can you explain how we could be sure that any of these would do exactly what we wanted it to?
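
For a toy illustration of the genetic-algorithm case (a made-up sketch, not code from any real AI project): the programmer writes only the fitness test; the candidate solutions are bred, mutated, and selected, never written by hand. A minimal Python version might look like this:

    # Toy genetic algorithm: the programmer specifies only a fitness test;
    # the "solution" that passes it is evolved, not designed or inspected.
    import random

    TARGET = "do what we meant"               # stand-in for the desired behavior
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # Score a candidate only by its observable behavior (character matches).
        return sum(1 for c, t in zip(candidate, TARGET) if c == t)

    def random_candidate():
        return "".join(random.choice(ALPHABET) for _ in TARGET)

    def mutate(candidate, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [random_candidate() for _ in range(200)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        parents = population[:50]             # selection
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(150)               # breed the next generation
        ]

    print(generation, repr(population[0]))

The winning candidate satisfies the test we set, but nobody ever wrote it line by line - and at any interesting scale, nobody fully understands how it does what it does.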

Chris

Janessa Ravenwood

Chris: Hmmm...I wasn't envisioning those approaches - those certainly have a lot more room for a not-fully-understood AI. I would thus regard them as poor programming paths to take. With an approach at least partially based on more conventional programming techniques, you see exactly what's going on, and if something isn't doing what you want it to do, you re-code and fix it. People using other approaches might accidentally achieve an AI, but I would think they wouldn't even understand what they had created - not a good engineering design process. Of course, that won't stop some people...

Tom Craver

Chris:
I specified "no sentient may *increase* its intelligence..." - "dumbing people down" is a strawman, completely irrelevant to my point. Not to mention that you chopped off the words "Maybe combine that with..." - it isn't as if I said "We must absolutely have a law requiring..." Are we only allowed to post finished, 100% thought-out ideas here, without risking "having a new one ripped"?

You write as if it is you whose IQ increase would be limited by the "2x" proposal. But will you still oppose a "2x limit" when it is you that will be left at the very bottom of the IQ scale along with all other humans?

Put a little concrete thought into how GAI and human IQ augmentation will likely develop.

Over the next decade or so, we'll achieve both fine-grained and high level understanding of how human brains/minds work.

Simultaneously, computational power will increase into the range where simulations of functioning brains become practical.

Both of those developments will be necessary preconditions for significant high-end increases to human intelligence, and are very likely sufficient conditions for GAI. GAI might be possible without both of those developments, and so could arrive sooner.

Ethical constraints on human experimentation will be much higher than for GAI, if only because any damage done to a GAI can be "restored". So GAI will very likely advance much faster.

So you tell me - which sentient will have more realistic options for increasing its IQ sooner? You? Or Robbie the Robot? Or should I say "Robbie the God-like AI Brain that Rules the Solar System, compared to whom you are as an insect"?

The comments to this entry are closed.