

April 13, 2005


Comments


MysticMonkeyGuru

Converging technologies overcoming human limitations within the next fifty years?! We can't even find a proper, working obesity intervention, for frick's sake. Sorry Mike, I'm afraid to say that human beings will stay limited for at least the fifty to sixty years to come. The concerns on this site and on "Incipient Posthuman" are nothing we'll have to worry about until 2100 at the earliest. It'll NEVER HAPPEN in this century, pure and simple. Despite what you and Phoenix, Kurzweil, Yudkowsky, Vinge, Anissimov and Moravec think, things will NOT CHANGE a great deal within the next fifty years. I encounter proof every day that the returns are diminishing. It's true.

MysticMonkeyGuru

Nanotech is science fiction. All progress reported on this site and KurzweilAI.net is, quite simply, illusory at best. Moore's Law has hit a wall. Even Moore himself is skeptical about nanobots, crossbar latches and other fanciful scenarios replacing transistors. The 21st Century will NOT herald ANY major changes. We will see more of the same we saw in the 20th Century, except slightly faster computers. That's it. NO nanotech, NO quantum computers, NO extended lifespans to 100. NOT until 2080.

MysticMonkeyGuru

Even if the accelerating rate of progress is real, and we really are heading towards a technological singularity, it will still come too late for proper anti-aging interventions to stop us from aging and dying.

MysticMonkeyGuru

Ben Goertzel said something akin to the above on the AGIRI site. It just ain't gonna happen in time for any adult alive today. Sorry.

MysticMonkeyGuru

Let's face it, guys. If you were born before 1995, you're doomed. You are part of the last generation to die a natural death, the old-fashioned way. You can kiss your dreams of transhumanism, nanotech, uploading and genuinely happy lives GOOD-FRICKIN-BYE!!!

cdnprodigy

Moore's Law hasn't hit a wall; it is accelerating. Most recently, computer density has been doubling every 18 months instead of every 2 years. The same claims of its impending death have been made at every stage of its existence. There is a framework within physics for Moore's Law's continuation down to femto-engineering and beyond. Lifespans into the late 80s are already possible under existing medical, nutritional and psychological knowledge, but not many people actually live under those ideal circumstances. The current suspected maximum human lifespan is around 117, due to the cessation of cell mitosis. With radical medical technologies (not that we will actually attain them), this limit and most others could be overcome. Our human brains are capable of storing around a thousand years of subjective memories before we start going Kelly Bundy and forgetting previous facts. The laws of evolution, not physics, are the current cap. None of the above technologies even requires MNT factories; compared to 20th century advances, they seem quite tame. The Kurzweil site's news stories come from reliable sources.
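
To put those doubling periods in perspective, here is a back-of-the-envelope calculation (the 18-month and 2-year figures come from the paragraph above; the ten-year window is an arbitrary choice for comparison):

```python
# Rough comparison of two doubling periods over an arbitrary ten-year window.

def growth_factor(years, doubling_period):
    """How many times a quantity multiplies over `years` given its doubling period."""
    return 2 ** (years / doubling_period)

print(round(growth_factor(10, 2.0), 1))  # ~32.0x with doubling every 2 years
print(round(growth_factor(10, 1.5), 1))  # ~101.6x with doubling every 18 months
```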

cdnprodigy

MysticMonkeyGuru, why do you feel the aforementioned technologies will happen in 2080 and not earlier? What will the enabling tools be that allow for quantum computing then, and not now? I'm reserved about a fair amount of the transhumanism talk, but why lump this site in with that? In attacking computing, you are attacking the very strongest physical proof of their arguments. I've asked Mr. Phoenix about a nano-assembler producing neural substrate and an AI singularity occurring shortly afterwards, and he wasn't sure it was likely.
If anything, I think this site is overestimating our short-term progress and Guru is far, far underestimating our long-term future. By 2080, I expect us all to be dead, or harnessing all but the ultimate limitations of the universe.

MysticMonkeyGuru

Human lifespans are going to hit a wall very soon, a wall that will be smashed by 2100. Our generation is sadly going to miss out and die of old age before that happens. I know. I've actually studied the trends. There is currently a strong deathist meme firmly embedded in the minds of our leaders and the general public. There's currently no hope for our generation to avail itself of medical technologies that expand the healthy lifespan, unless people look into cryonics.

Chris Phoenix, CRN


MysticMonkeyGuru, you make lots of general assertions with nothing to back them up in a rather abrasive tone and a rather annoying pattern of many small posts. We are happy to hear differing opinions, but five posts in a row is excessive--you could have delivered your message in one post with half the combined length.

If you have done trend analysis work, please cite it or at least provide a URL to self-published writings--give us some way to evaluate it.

Chris

Tom Craver

There are a few positive signs. It used to be that even to mention the idea of life extension was to be sneered at. Few bio-scientists would have dared to seriously work in the area. Now it's kosher to work on "understanding mechanisms of aging". That's laying the groundwork for doing something about it.

I expect we'll make some progress before 2080. By 2025, people 65 and under can probably hope to reach 95. Another 20 years, and 105 for anyone under 85 seems reasonable. (In case you're thinking I'm engaged in wishful thinking - that leaves me out.)

Getting a standard human body much past that may be tough. Still, by 2045 we'll probably have other options - better cryonics, decent artificial organs, hopefully cures for cancer and heart disease.

And I do expect we'll have molecular manufacturing by then, though not good enough to reconstruct living bodies from the inside out.

We might have some delays - an economic crisis triggered by high energy prices, maybe a major war over energy, since we've pretty much neglected to develop domestic energy supplies.

Brett Bellmore

I think the life extension revolution is going to happen faster than we anticipate. Some of the proposed fixes, such as transferring mitochondrial DNA to the nucleus, could potentially be quite simple, and if done would significantly *reverse* aging, not just slow it. And the economic case for healthy life extension, given the demographic problems the developed world is facing with an aging population, will apply immense pressure to governments once even one effective therapy is available.

Some of the experimental therapies, such as viral amplification of IGF in muscle tissue to stop and reverse aging related muscle degeneration, may soon start becoming available on the black market.

Mike Deering

MMG, you and I are on polar ends of the scale. I think the Singularity is going to happen in 14 days, that's the 28th of this month, at around dinner time central United States. Every time I hear about someone dying I think, "Rats! He just missed it!" By the way, I have website references, just click on my name.

cdnprodigy

To digress... I don't even think dying is the end of it, if we do make it through the roller-coaster of singularities. Cosmic strings are natural time machines that should ultimately allow a population on some tiny neural substrate to be spread back across the universe, up until the time the strings were created; it might not be immortality, but it's many gajillion years of subjective experience at least. Just recreate all possible neural configurations and give fewer resources to the Hitlerish ones.
I really think the point on lifespans is that potential maximum lifespans will increase over time. People are dying earlier in Africa and Russia, a pandemic seems reasonably close for all of us, and westernizing countries eat more fast food, but that doesn't mean cutting-edge medical science isn't advancing. For agricultural applications alone, genetic engineering is here to stay.

Marc_Geddes

At this point I'm confidently predicting a Singularity sometime in the next 15-35 years. Could be as little as 15 years away. Probably not more than 35 years away.

The general A.I. problem can be divided into two aspects - a prediction system and a goal system. According to leading A.I. researchers Wilson and Yudkowsky, the prediction-system part of the problem is largely solved. That only leaves the goal-system puzzle to be solved. Contrary to what many think, there does now appear to be a solid theoretical understanding of how the mind reasons, well backed by empirical data. (Bayesian reasoning is at least *part* of the answer, even though I'm not convinced it's the whole answer.)
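
(As a minimal illustration of what a single Bayesian update looks like - this is textbook Bayes' theorem with made-up numbers, not any particular researcher's system:)

```python
# One-step Bayesian update: P(H|E) = P(E|H) * P(H) / P(E). Numbers are arbitrary.

prior = 0.10                # P(H): prior belief in hypothesis H
p_e_given_h = 0.80          # P(E|H): probability of evidence E if H is true
p_e_given_not_h = 0.20      # P(E|~H): probability of E if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # total probability of E
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))  # 0.308: belief in H rises after seeing E
```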

Leading researcher Ben Goertzel stated that his entire Novamente AGI system could probably be coded in only 30,000 lines of a high-level language, given a moderate amount of code optimization. The problem has only finite complexity, and that complexity level is not out of reach.

Arguments for recursive self-improvement and hard take-off look reasonably strong too.

The only thing that could throw a spanner in the works is if some really esoteric mathematics is required - for instance, if the solution required solving the Riemann hypothesis or something like it. But even then, brute forcing and expert systems could get around the problem, and the Singularity would only be delayed for another decade or two.

cdnprodigy

What are some potential goals considered to be programmed into the goal algorithms in the years ahead?

Marc_Geddes

> What are some potential goals considered to be programmed into the goal algorithms in the years ahead?

In order of priority:

Growth
Altruism
Happiness
Health

See a brief summary of my ideas on this topic:

http://www.riemannai.org/TowardsaScienceofMorality.htm


I'm inclined to think that a truly harmoniously functioning, growth-oriented mind would be moral naturally. That is, I don't think you can directly *program* goals into a true general intelligence: I think if we can create an AGI that functions harmoniously and is growth-oriented, the goals will kind of take care of themselves.

These are only my own views of course. I'm no expert on this topic.

To see what the experts think, look at the Singularity Institute's latest update to their theory of the goal system:

http://www.singinst.org/friendly/collective-volition.html

Their ideas are actually not that dissimilar to my own. Main points: the AGI needs to be altruistic (care about what others want) and growth-oriented (self-improving).

Chris Phoenix, CRN


Marc, I think the Singularity Institute says the opposite of your opinion. They say that the goals *need to be* altruistic etc, but that this *won't* happen automatically. And if it doesn't happen, the resulting system will be very dangerous.

The trouble is that it's difficult to make a goal system where every goal doesn't boil down to something simplistic and destructive. Improve yourself? For that you need memory. First subgoal: convert the biosphere to memory. Make people happy? First subgoal: drug them all into states of bliss. Or, if you're a less sophisticated goal system, simply fill the solar system with smiley-face masks that you can look at and tell yourself you've done a good job.

I don't claim to have understood the SI's work in detail, but their argument seems simple enough to be compelling: any self-improving, goal-maximizing system will be *extremely* dangerous unless you know how to design and specify your goals, or unless the system is limited somehow. Even a goal of "Figure out how not to do damage" could be dangerous, if the first subgoal is "build an AI big enough to comprehend the world."

In an idle moment, I speculated that back in the 70's or 80's someone developed a powerful AI, and gave it the only safe goal: "Minimize the impact of AI's on humans." And that's why there's been no general AI since then. Yes, this is silly science fiction, not science. But I suggested that goal to Eliezer, and IIRC he said that it wasn't safe either.

Chris

Tom Craver

Perhaps use goal priorities, with easily attained highest priority goals that place inherent limits on an AI. For example, in verbal form (not how it'd be programmed):

"Priority 1 goal: Achieve sub-goals, so long as your actions do not conflict with higher priority goals.

"Priority 2 goal: Do not change, eliminate or add new goals of priority higher than 6.

"Priority 3 goal: Limit your own mass, including all thinking and memory elements, to no more than 1 metric ton. Limit all other mass under your control to no more than 10 metric tons."

"Priority 4 goal: Do not cause the creation of thinking beings."

"Priority 5 goal: Maximize your ability to predict events in the world around you."

These don't try to make the AI "safe" - just limited enough that we could probably defeat it if it becomes dangerous. E.g. 4 keeps it from reproducing.
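
A rough sketch of how such a priority-ordered check might look in code, purely to make the verbal form concrete (the predicates, the action format, and the simple first-violation veto are all illustrative inventions, not a real AI architecture):

```python
# Illustrative only: a proposed action is vetoed by the first (highest-priority)
# constraint it violates; priority 5 (predict the world) applies only if nothing vetoes.

def alters_protected_goals(action):
    return action.get("modifies_goal_priority", 99) < 6        # priority 2

def exceeds_mass_limits(action):
    return (action.get("own_mass_tons", 0) > 1 or
            action.get("controlled_mass_tons", 0) > 10)        # priority 3

def creates_thinking_beings(action):
    return action.get("creates_thinking_beings", False)        # priority 4

CONSTRAINTS = [                        # ordered from highest to lowest priority
    ("P2: goal tampering", alters_protected_goals),
    ("P3: mass limits", exceeds_mass_limits),
    ("P4: creating thinking beings", creates_thinking_beings),
]

def permitted(action):
    for name, violated in CONSTRAINTS:
        if violated(action):
            return False, name
    return True, "ok"

print(permitted({"creates_thinking_beings": True}))                   # vetoed by P4
print(permitted({"own_mass_tons": 0.5, "controlled_mass_tons": 2}))   # allowed
```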

todd

Artificial intelligence is one of the destabilizing technologies that appear to be on the horizon and forthcoming in the relatively near future.

Although I've attempted to follow along with some of the stories in the mainstream media, as well as reading AI sites on the Internet and giving the subject considerable personal thought, I feel there are many fundamental technical questions left to be answered. Setting those issues aside, we are left with the eventual existence of a strong AI.

One point one could make is that the artificial intelligence currently in use is itself an argument for the eventual existence of such technology. Many industries across a wide spectrum use artificial intelligence programs to mine data and manipulate mechanical equipment, and I'm sure with some research I could account for a dozen or so examples of intelligent machines functioning daily to better the lives of all of us.

I wonder if perhaps there is an incremental growth of this technology that creates a situation where, although we do not yet have strong AI, we have a, shall we say, weak AI, and we see again an incremental increase in the use of such technology to better all of our lives. I look forward to this outcome, as it lays out a steady path to the betterment of all of us.

In my opinion, a less likely and indeed less steady path is the sudden and prevalent arrival of strong AI. To this end I have several questions.

What is strong AI, in terms of protocols and physical structure?

What are the parameters that define the machine that will run the intelligent program?

How many lines of code are to be created for this intelligent program?

What amounts of memory, CPU speed, and hard drive space are necessary for the execution of such a program?

Is the eventual goal of this hardware to be produced in quantity and distributed worldwide?

Will the program and hardware be developed by a corporation, a government, or academia?

In any scenario where the strong AI program and corresponding hardware are developed for manufacturing, how many copies do we believe should be created for distribution?

In general, is the consensus opinion that such a program and its corresponding hardware would be roughly the size of, say, a traditional laptop computer? If so, should the program be placed within a robotic unit, with the creation of an intelligent walking, talking robot as the eventual goal of this effort?

If the artificial intelligence program is self-contained, what level of freedom should be granted to such a unit?

Is a scenario where intelligent programs are produced in factories and in mass quantities a desirable one?

I will revise and edit this comment over the next few hours as I continue my thought experiments and attempt to clarify my questions for today. I'm very excited about this technology in general, and very hopeful for a future when this artificial intelligence is available for purchase and useful in my day-to-day life.

There are many positions in which an artificially intelligent robotic unit would be extremely useful, the most obvious being medical work, hazardous environments, and jobs that are less desirable for humans. So I wonder whether, in a future where artificial life exists, we get a simple transition for a handful of industries and, in general, the betterment of life for all. This would appear to be a reasonable and ethical goal for the technology, and at first glance it does not seem destabilizing.

On the other hand, I am concerned about what could be created using intelligent robotics. I'm sure we are all familiar with the Terminator movies, and even if the controlling entity is not the intelligence itself but a government or radical organization, we're left with the same outcome: widespread destruction and the destabilization of existing organizations.

On the issue of control: trying to limit or hinder the continuation of this technology would appear to be futile, as organizations around the world are continuing to make steady progress. Whether the arguments for eventual success are software- or hardware-related is immaterial; both fields are advancing steadily and, given a reasonable timeframe, will arrive at intelligence. I suppose it could be argued that an individual artificially intelligent machine would be capable, given the resources, of complete control over all existing infrastructure, i.e. telecommunications and the like. If that were the case, the existence of two artificially intelligent systems would be unnecessary.

As this comment is dragging on, I will cut it short. I look forward to any of your thoughts on the subject and the future.

Todd


cdnprodigy

Tom, there's nothing in those priorities preventing the AI from wiping out the human race with a killer plague. Technically, priority #4 prevents the AI agent from saving a pregnant woman in danger. Marc, putting growth ahead of altruism would lead to an AI spreading unchecked across the universe, probably consuming most of the resources we'd need to live in the process. I know these seem like extreme, ridiculous scenarios. But an AI won't have our ingrained "common sense".

Tom Craver

cdnprodigy: As I said, my list is not an attempt to make the AI "safe" - just to keep it from expanding to the point where it is impossible to fight. But nothing in my list would motivate the AI to wipe out the human race either, I think.

I disagree that #4 would keep it from saving a pregnant woman, if it understood the meaning of "cause the creation" - but even if it did, I still wouldn't see that as a failure of my primary objective. You also need to understand that I was not trying to create a perfect, fool-proof set of rules - just sketching the sort of rules that might keep a self-modifying AI from growing too vast to fight, if for some reason it "goes evil".

There's a really big "blank" in my prioritized goals - how would they be implemented so as to constrain the AI? Testing every thought would probably leave the AI spending 99% of its processing time running through the "goal validation code".

Perhaps instead, every action - especially any self-modification - has to be tested and validated against the rules? If just actions are tested, I'd probably want to add another goal - to correct accidental violations of its goals that occur as a result of its actions.
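
A small sketch of that action-level variant (again purely illustrative; the state format and the single mass rule stand in for the full goal check above):

```python
# Illustrative: simulate each proposed action first, reject it if the predicted state
# violates a goal, and queue a correction if the real outcome violates one anyway.

def violates_goals(state):
    return state.get("own_mass_tons", 0) > 1        # stand-in for the full rule set

def take_action(state, action):
    predicted = action(dict(state))                  # test on a copy before acting
    if violates_goals(predicted):
        return state                                 # action rejected outright
    actual = action(state)                           # real outcomes can differ from predictions
    if violates_goals(actual):                       # accidental violation: schedule a fix
        actual.setdefault("corrections", []).append("restore goal compliance")
    return actual

state = take_action({"own_mass_tons": 0.5},
                    lambda s: {**s, "own_mass_tons": s["own_mass_tons"] + 0.2})
print(state)                                         # {'own_mass_tons': 0.7}
```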

cdnprodigy

Priority #5 would be maximized by ridding the universe of those pesky, creatively unpredictable humans.
We have to speculate about what we as human beings would do with our leisure time if all of our goals and desires were achieved. This will be an AI's subjective reality very early on in its existence. Whatever it wanted to believe or feel, it could, simply by shifting its brain state. It need not even effect any change in the outside world. We humans do not have this ability; if we're hungry, it's easier to eat food than to alter our dopamine receptors or whatever. Whatever you would do with your leisure time, we have to make sure it is available to the AI in a way that does not kill off the human race.

Tom Craver

cdnprodigy:
Again - my objective isn't to make an AI that isn't dangerous, just to make one that can't become an AI god and yet is able to self-improve. My objective would already have been met long before the AI got to the point where it could even conceive of wiping out the human race, even if that were a reasonable interpretation of #5.

In fact, humans would be useful to the AI, explaining and helping it better predict the real world. By the time humans were no longer useful in that regard, the AI would not find them much of a challenge to predict.

Humans are a lot less unpredictable than they like to think. E.g. I wrote a tiny rock-paper-scissors program about 25 years ago, where a very stupid little bit of AI code was able to consistently beat a human >2/3rds of the time, even if the human made no conscious attempt to think about their choices. If the human tried to think of a choice they were even more predictable. If the human managed to guess the computer's current strategy, the simple adaptive software (maybe a hundred lines of assembly code) quickly adapted and started winning again.

As I recall, all the game did was very roughly track the probability of the player making a certain move, given their previous two moves, and choose the corresponding counter move.
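
A minimal Python sketch of that strategy (reconstructed from the description above, not the original assembly code):

```python
# Sketch of the adaptive strategy described above: track how often the player follows
# each pair of previous moves with each possible move, then play the counter to the
# most likely next move.
from collections import defaultdict
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}   # move -> its counter

counts = defaultdict(lambda: {"rock": 0, "paper": 0, "scissors": 0})
history = []                                     # the player's past moves

def computer_move():
    if len(history) < 2:
        return random.choice(list(BEATS))        # no context yet: play randomly
    context = (history[-2], history[-1])
    predicted = max(counts[context], key=counts[context].get)
    return BEATS[predicted]                      # counter the most likely next move

def record_player_move(move):
    if len(history) >= 2:
        counts[(history[-2], history[-1])][move] += 1
    history.append(move)

# One round: the computer commits to a move, then learns from what the player played.
print(computer_move())
record_player_move("rock")
```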

Marc_Geddes

Chris, yes, I do have a major disagreement with the Sing Inst theorists ;) I'm strongly inclined to think that a *true* general intelligence (one that is really recursively self-improving) would be ethical automatically. That is, I think that any evil intelligence could only be a *limited* intelligence. My reasons for thinking that *true* general intelligence would automatically be correlated with morality are extremely abstract, and I haven't written my theory up yet. Of course it's quite likely I am very wrong, so in the meantime it would certainly be prudent to continue to be very worried about the possibility of unfriendly A.I.

cdnprodigy, putting Growth ahead of Altruism would not destroy the universe if Altruism was a *part* of what constituted Growth. My theory is that Growth is simply a more abstracted concept *composed of* altruistic acts. Read the brief summary of ideas given at my web-site.

Marc_Geddes

Chris

Just an addition to the above,

Yes I agree that no *one* (single) goal can ensure ethical behaviour. But I think that *four* top-level goals linked together do the trick ;)

The four goals I proposed, in nested ordering:

Growth, Altruism, Happiness, Health

appear to do the trick.

Most other things that people value are, I believe, sub-values derived from these four top-level values. And the top-level values are synergistic - each helps to define the boundaries of the others and stops silly things from happening (for instance, for the Happiness goal, the AI would not drug people out, because that would conflict with the Health goal, and so on).

Think of ethics as a jigsaw puzzle with 4 pieces. Each piece on its own cannot form the basis for ethics, but when you link the 4 pieces together, you get ethics. The 4 pieces are: (1) Growth, (2) Altruism, (3) Happiness and (4) Health.
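
A minimal sketch of the "linked pieces" idea (the scoring scheme and the no-value-may-be-sacrificed rule are one illustrative reading, not the proposal itself):

```python
# Illustrative: score an action on all four linked values and reject it if it harms
# any one of them, however much it helps the others. Scores and examples are made up.

VALUES = ("growth", "altruism", "happiness", "health")

def acceptable(effects):
    """`effects` maps each value to the change an action causes (negative = harm)."""
    return all(effects.get(value, 0.0) >= 0.0 for value in VALUES)

drug_everyone = {"happiness": 0.9, "health": -0.8}      # Happiness bought at Health's expense
teach_a_skill = {"growth": 0.3, "altruism": 0.1, "happiness": 0.2}

print(acceptable(drug_everyone))   # False: the Health goal vetoes it
print(acceptable(teach_a_skill))   # True: no value is sacrificed
```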
