
November 11, 2005

TrackBack


Listed below are links to weblogs that reference The Need for Limits:

» Winner take all and Molecular Manufacturing from Miron's Weblog
I posted on a discussion about the problem of winner-take-all and MM. ... [Read More]

Comments


Rik

Chris,

I think there may well be a period without limits, like the golden age of Napster. Anything you wanted was available. But out of that situation grew awareness, regulation (not very successful), and legal competitors. With MM it might be the same.

May I suggest, btw, that you are in 'peggy noonan'-mode? You want to put all of this on people's radar, but is that really a good thing? I'm not suggesting you should talk to an elite instead, but perhaps all is not as bad as you seem to think it is.

I don't like Dixon's suggestions for a world government. I think it'll fail. Jamais Cascio - of WorldChanging.org - has a podcast on IT Conversations, where he talks about a "participatory panopticon", as he calls it. I didn't write about this already, did I? Sometimes I just forget. Anyway, Cascio thinks that, certainly in the next decade, we'll grow into a situation of perfect memory. Everything we say and do will be recorded. Of course there will be a Big Brother, but as a counterweight even more Little Brothers and Sisters. As Cascio puts it: "Who will watch the Watchman? All of us." The idea that people will happily watch one another is scary - because it's a world where lying is hard and privacy and intellectual property do not exist in their present form, to say the very least - but it's probably preferable to a UN-style world government. I would prefer it, anyway.

Mike Deering

Chris, you have made a good point. Designing a system of limits is going to be harder than designing the technologies to enforce it. The problem is that we are not all the same. One person's reasonable limits are another's oppression. In the 1950s in the USA, we had a system of morals, engineered and communicated by the then-new technology of television. In this system everyone was heterosexual. This was okay with most people, but was very oppressive for some who had a strong tendency to homosexuality. You have to make everyone the same for any system with limits to work. And doing that would be a crime in most people's eyes.

Society is moving in the direction of an ideal freedom. It's not moving fast, and there have been a few bumps in the road, but it is moving. The ideal freedom is one in which everyone can do what they want as long as they are not hurting anyone else. This is a society that emphasizes personal responsibility, in that the system (the government) does not protect people from themselves. There would be no victimless crimes. There would be no laws controlling what you could do with or to yourself. You would own your own body, not the state. The FDA would make recommendations, not regulations. Prostitution and drug abuse and suicide would be legal.

How do you make a society of perfect freedom safe? You have to have no crazy or evil people. How do you do that? You could just make them all nice, sane people through technological intervention. We will soon have the capability. But this would violate the perfect freedom. Another way is to remove all of the causes of insanity and evil, and provide the environment to nurture kindness and sanity. This would work also. We will soon have the technology and the knowledge to make this work, but it would take time. You don't replace a desert with a forest overnight doing it naturally. During the transition there would have to be limits that would violate the perfect freedom. We can do this. People are not born killers, or if they are, this can be identified and corrected while they are young.

Mike Deering

"The idea that people will happily watch one another is scary - because it's a world where lying is hard and privacy and intellectual property do not exist..."


What is scary about not being able to lie, not having any privacy, and not being able to prevent others from using certain information? None of these things seems scary to me. Just because something scares someone, does that make it wrong? Is the fear rational? Some people are afraid of spiders too. I would be more interested in hearing what practical problems the participatory panopticon would create, rather than what fears it would cause in some people.

michael vassar

Chris, I think that this may have been a stumbling block between us in the past. I had no idea you were thinking in terms of an MNT admin system existing in the context of other, more powerful systems. There still may be another important stumbling block. You said "Others have proposed building an AI to govern us -- though they have not yet explained how to design limits into the AI". While technically true for some of the "others" referred to, I have made AI proposals where I do specify how to design limits into the AI, and the Singularity Institute ( www.singinst.org ) has compellingly demonstrated that for most classes of real AI, limits are completely impractical (humanly impossible is putting it mildly), and that for some very small sub-classes of AI, limits are unnecessary as well.
I agree that a very small government of mutually transparent people focused almost exclusively on MNT admin may be our best, though very imperfect, option. I'm not sure of what you mean by "problems" though. Do you mean that in your opinion it's not ideal, not workable without a more detailed design and more preliminary research, not workable in the long term, or not workable period?

Brian Wang

How much insight into creating a stable system can we get from game theory and other mathematical theory?

Here are some of my early thoughts.
Although there may be too many interrelationships... we can propose some kind of controls against extinction, controls against oppression, etc., and mathematically hypothesize the effectiveness of each control. Say we have a control against extinction, estimated at 99% effectiveness. Then it has a 1% chance of still allowing someone to cause extinction (a Type I error). But the tighter we make the control, the more we increase the risk of a Type II error: over-controlling the innocent. (A rough numerical sketch follows below.)

http://www.intuitor.com/statistics/T1T2Errors.html

Multiple controls are needed, along with systems for redressing the Type I and Type II errors before they actually cause irreversible damage.

Also, there is the issue of confidence levels for the estimates.
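To make the tradeoff concrete, here is a minimal sketch in Python. The 99% effectiveness figure is from above; the 5% per-control false-positive rate and the assumption that the controls fail independently are made up purely for illustration, and "Type I"/"Type II" are used the way I labeled them above, not in the textbook sense.

```python
# Toy numbers only: assumes each control fails independently of the others.

def residual_extinction_risk(effectiveness, n_controls):
    """Chance a dangerous attempt slips past every control (Type I, as labeled above)."""
    return (1.0 - effectiveness) ** n_controls

def innocent_flag_rate(false_positive_rate, n_controls):
    """Chance an innocent use trips at least one control (the Type II burden above)."""
    return 1.0 - (1.0 - false_positive_rate) ** n_controls

for n in (1, 2, 3):
    print(n,
          residual_extinction_risk(0.99, n),   # 0.01, 0.0001, 0.000001
          innocent_flag_rate(0.05, n))         # 0.05, ~0.0975, ~0.1426

# Stacking controls drives the residual extinction risk down quickly, but the
# burden on innocent users climbs with every control added -- the tradeoff in question.
```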

In game theory, we would have to be able to modify the analysis to include individuals being able to make multiple moves (operating more quickly with MNT); a toy sketch follows the link below.

http://www.kli.ac.at/theorylab/Keyword/E/EvolyGameThe.html
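As a rough illustration of why multiple moves per round matter, here is a toy model; the growth rate, number of rounds, and the threefold speed advantage are all made-up numbers, nothing more.

```python
# Toy model: two actors following the same strategy, but B gets several
# development "moves" for each one of A's (a stand-in for a faster MNT
# design/build cycle). All numbers are illustrative only.

def capability_after(rounds, start, gain_per_move, moves_per_round):
    cap = start
    for _ in range(rounds):
        for _ in range(moves_per_round):
            cap *= (1.0 + gain_per_move)
    return cap

a = capability_after(rounds=20, start=1.0, gain_per_move=0.05, moves_per_round=1)
b = capability_after(rounds=20, start=1.0, gain_per_move=0.05, moves_per_round=3)
print(a, b, b / a)  # the faster mover ends up roughly 7x ahead, even from an equal start
```

A standard one-move-per-round analysis would treat A and B as evenly matched; the point is that move rate itself becomes a decisive variable.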

We also have to prioritize the things to protect. Perhaps:

1. Extinction
2. Complete oppression
3. Falling behind/losing control to a dangerous group
4. Genocides
5. Ability to continue to advance
6. Various freedoms

etc...

We also have to reality-filter the controls and activities against current reality and the likelihood of implementation.
I.e., there is no point pushing a proposal that will never be implemented in a useful timeframe. Although you could think it up and publish it if you really believed in it, you should also have some easier-to-digest-and-implement alternatives that can help the situation.
It is similar to not having enough money for retirement: you still save and invest as much as you can in the meantime, you try to lower expenses, and you stay close to friends and family to mutually assist each other, etc.

Some additional devices on a nanofactory:
- Open monitoring
- Multiple parties monitoring each one
- Public broadcasts of usage records
- Built-in alarms
- Self-destruct triggers

There are many players in the world now, and globally applied rules and globally acknowledged organizations are limited.
More creative thought should be put into generating something relatively stable from this situation (the one that exists) than into calling for a system which does not exist.

Also, I question the assertions: "First, such a competition would probably be unstable, winner-take-all, and end up in the same massive oppression that we're all concerned about. Second, the contest would quickly shift to computer-assisted design and attack."

Also, how long will these things remain the greatest risks to stability? Until various powers and peoples are dispersed throughout the solar system and we have better anti-extinction capabilities. Also, we can have cooperation between smaller groups and peoples to balance one player that is getting ahead.

jbash

I'm sorry... I think there's something in there worth responding to, but I can't pick it out very well. I think you're skipping steps and/or digressing.

I'm one of the people who's (recently) been arguing against the idea of a single, central control system, not only because I think it'd be bad but because I think it's unimplementable and distracting. That doesn't mean that I don't think there will be, or should be, systems larger than a single actor that limit the power available to most or all single actors... just that I don't expect or desire just one, unified, coherent system of limits, and I don't expect the systems that do exist to be nice, pretty ones with hard-and-fast rules, lots of formal institutions you can point your finger at, and a clear hierarchy of authority.

I'm having trouble extracting exactly what you're worried about from "There will be a few spoilers/cheaters and everyone else will follow them straight into the pit.". In what way will they cheat, and into what pit will they be followed? I certainly would expect bad actors, and given the power of the technology, I expect those bad actors to be able to do great damage. I don't see everybody else copying the first nano-terrorist; on the contrary, each of them as an individual would have great incentives to work with the others to keep the terrorism down to a dull roar.

I'm also confused when you say that MM will be "the biggest system". I don't see where it's a system at all. It's a technology. If you're saying that it'll be the biggest source of *power*, you're probably right, barring self-improving AI. I don't think that means that it's correct to think of it as a system.

Chris Phoenix, CRN

Wow! That's a lot of great stuff to respond to.

I'll start with jbash. I'm not saying that we need a coherent system of limits. I'm saying that we need to have *some* limits. They need not necessarily be a system; previous limits were in general not systems, but things like resource constraints or speed of travel or geopolitical actors.

If you can come up with a non-systemic source of limits that can't be bypassed by an MM-enabled government, great. I'm not seeing it. I'm also not seeing a stable systemic source of limits.

Re the slide into the pit, I mean that the bad actors (or defectors, in game theory) will incite other defectors. Violence breeds vigilantism. Cheating breeds lying. Etc. I'm pretty sure that terrorism would not be the only kind of bad action.

Brian: I agree, if we can make it into the solar system, speed of light may impose a useful limit. But I'm not sure about the interplay between scarcity of matter and gravity wells; it may be impractical to live out there. I'd feel better if we had a propulsion technology that didn't require reaction mass, though I suppose ion thrusters will probably be good enough. Power is another consideration.

As to your other points: Again, I'm not proposing a system, or even saying that we will need a system. I'm saying that we will need limits and I don't know where they will come from. If I'm wrong about winner-take-all, then that will provide some limits. So feel free to question that assumption. But I'm pretty sure that offense beats defense; that inventiveness makes for rapid power shifts; and thus a winner who tries to take all will succeed, but a winner who doesn't take all will quickly be overthrown.

Michael, I guess I missed the part where SingInst had figured out how to design a friendly AI.

Rik, I see the participatory panopticon as a variant (or extreme) of reciprocal accountability, and subject to the same problems. I'm not sure what you mean in your "peggy noonan"-mode paragraph. Are you saying that it's better not to talk so widely about this problem? Or that the problem is not as bad as I think? Or both?

Chris

Mike Deering

Michael Vassar wrote: "...and that for some very small sub-classes of AI limits are unnecessary as well."


Actually, the definition of the sub-class is the limitation. I assume you are talking about Friendly AIs as the sub-class. Then being friendly is the limitation; being unfriendly would be exceeding the limitation. A FAI is not an unlimited system. It has a very severe limitation upon it: being friendly. That means there are a whole lot of things it cannot do. To say that it doesn't want to do unfriendly things and is therefore not limited is just moving the limitation to another area of its design.

michael vassar

Obviously no one knows how to design a FAI, or it would have been done. People have figured out (hell, I had figured out before I had heard of "singularity") that whatever limits you impose on an AI will be irrelevant once it's much smarter than you, and that whatever limits you build in will be irrelevant once it understands itself reasonably well (and also that it must be understood reasonably well to build in limits that do what you want them to in novel situations), but that you don't need to give an AI limits to make it safe; you need to make it devote its intelligence to goals which are compatible with your safety.

The checks on a system's pursuit of its goals are its limitations. The causes that lead to the particular goals a system pursues are its history or its description, not its limits. Goals are also not limits on an intelligence. In the absence of goals, intelligence is an incoherent concept. Being Friendly is not a limitation in an AI any more than the absence of a desire to eat Doritos or to pray to Jesus is a limitation in a human. A human can still eat Doritos or pray to Jesus if he wants to in pursuit of some goal important to him (politeness, perhaps), and a FAI can still do any of the things a UFAI can do (say, convert large amounts of matter to computronium) if there is a Friendly reason to do so, such as conflict with a UFAI.

Actually, the FAI is LESS constrained than the person. The person can eat Doritos or pray to Jesus, but he can't want to eat Doritos or believe in Jesus, even if there is a rational reason to do so in terms of his existing goals and beliefs (probably involving mental probes and torture?). A FAI could in principle do anything, even become a UFAI, and would do so without reluctance or any other emotion if doing so was actually advantageous in terms of its existing goal structure (for instance, recycling, or converting to energy, the matter on which Friendliness content was stored, in order to run some positive-utility [to it] simulations, and becoming a "simulation of X"-maximizing UFAI with its last zeptoflop as the universe ran down).

Karl Gallagher

"First, such a competition would probably be unstable, winner-take-all, and end up in the same massive oppression that we're all concerned about."

I don't see any way to justify that assumption. Entropy-free game theory sims wind up like that, not the real world. Bad actors provoke coalitions that stop them. Super-weapons are stopped by innovative use of low-tech ones.

The most dangerous scenario I see is one where world leaders are convinced by your scenario and act accordingly. That's a good way to recreate World War One--frantically rushing to be the first to attack, and only realizing defense dominates after the first million troops die.

Returning to your essay: "There will be a few spoilers/cheaters and everyone else will follow them straight into the pit."

The human race has had many spoilers and cheaters throughout history. Some have followed them. More have acted to stop them, done other useful things, or cleaned up their messes. That's how we got to where we are today. It was ugly, messy, slow, and turbulent. The future will be too. Try not to make it worse by panicking.

Phillip Huggan

Brian, in the short-term, dispersing throughout space might be an anti-extinction strategy. But if different communities in space use different security protocols for their technologies, it becomes almost a certainty one community will breed AGI or a General Relativity weapon and poison the aquifer for all. Maybe FAI developed first could physically tile the universe to prevent this, maybe not. Depends on physics.

I would prefer to monitor the environment rather than individuals, but maybe a Participatory Panopticon will be better/easier. I don't see any PP enforcement mechanism explained; this is the tricky part. People will still murder (not as many, though). More importantly, MM personnel with nanofactory access, or even a tyrant who somehow gains control over a major military power's forces, will not care if there are cameras everywhere. It is in the drafting of laws and in the enforcement of them where details are needed.

I view MM "disarming" as a way to buy existing society enough time to mature enough to responsibly handle this brave new world. Measures assuaging extinction threats are the only "laws" I would prefer to immediately administer with MM. An MM product library, weaponry, sensors, even the humanitarian aid distribution I hold to be so righteous: all are merely disarming means.

Tom Craver

Chris:

Any approach that gives one entity the power to enforce order and security invites oppression and revolution - creating the very risks we want to avoid.

The answer is to put a single *precept* in place as the most powerful element - NOT a group or person or AI.

The Fundamental Precept must hold "power" simply in that virtually everyone agrees to uphold it. It must be made known all over the world, along with the explicit responsibility of all individuals, nations and organizations who agree with it, to enforce and defend it.

It must be something that nearly all will support - so it must be so simple that people say "Well, of course - isn't that obvious?" To which the answer is - "Yes, except that now YOU are personally responsible for ensuring that all will abide by it."

I propose that the Fundamental Precept is:

"Free-willed Humanity must survive".


Yes, that leaves open questions (What is humanity? What is free will? Survival?) - but we share enough common understanding that the details need not be 100% agreed upon for the precept to be useful.

Chris Phoenix, CRN

Tom, interesting idea. But the corollary of "Free-willed humanity must survive" is "Existential risks must not be tolerated." And a corollary of that is "Super-powerful non-human mind-children (transhumans or AIs) must not be tolerated" (because of what they might do to humanity). But a near-certain effect of free-willed humanity is super-powerful mind-children.

Chris

Chris Phoenix, CRN

Karl: One of the foundations of my concern about MM is that it will be nearly entropy-free, in the sense that you get what you want with very high efficiency. Hence my concern about lack of limits. If there are few inherent limits on what an actor can do (feedstock, power, information); and few external limits (offense beats defense); and their internal limits can't be trusted... then everyone will have a strong incentive to reduce the number of actors in the game. Any actor or coalition in a winning position will have a strong incentive to strike. Asymmetric warfare just makes it worse.

I agree there's a risk of world leaders following my logic. If someone can show that it's wrong, please do!!!

And I agree that in WWI, people didn't realize until too late that dirt was a great defense against artillery. But by the time WWII came along, the only defense against airplanes was more airplanes. I have tried to figure out whether defense will be successful in an MM world, and my best guess is that it will not be. There will be too many ways to attack, and defense will require too much energy and too many restrictions. It will be difficult (though certainly not impossible) to completely destroy an entrenched power, but it will be easy to destroy most things that power cares about, and easy to destroy its ability to win an all-out conflict in the short term.

MAD won't work; there will be too many actors, and too much chance of a flareup or a mistake.

One thing that might, maybe, work is a specialized institution maintained for the purpose of generating and disseminating information on defensive technologies. This could provide a stabilizing influence, and all actors would have a motivation to maintain and enhance its effectiveness.

As to whether people *left to themselves* (my original qualifier) will enter a downward spiral: Through most of history, people have not been left to their own devices. There has either been an evolved social order or a forceful government--sometimes both. When social and political authority weakens, people do all sorts of destructive things. I recently read a rather scary list of survival tips from post-collapse Argentina. Things got real bad, real fast.

http://www.peakoil.com/fortopic14183.html

Also look at how quickly urban disorder grows when police are seen to be even temporarily impotent.

Chris

Miron Cuperman

I think that Chris may be right that offense will trump defense with MM, and I think this is the root of the instability he sees.

I think that controlled nanofactories are likely to be oppressive. Why would the general population trust an elite that can build unrestricted tools, while they only get a watered down version? They would suspect, rightly, that the elite will consolidate and extend their power over the masses.

The only way I see out of this dilemma is to make defense stronger than offense.

The way to do this is to move to a more robust substrate, i.e. uploading into a massively redundant computer network. Baseline humanity is too fragile to defend against MM attacks.

To get there without passing through dangerous territory, we will have to focus R&D on certain pathways. These would include:

- Computation
- Brain research
- Brain scanning
- Active shields

We would have to put enough resources into this research so that others do not have time to develop offensive technologies.

What do you think?

Jannis

Interesting stuff; until now I had no idea about nanotechnology, but I am finally starting to read a German book about the subject.
In fact, I have started my own blog on globalization; it focuses mostly on European and international politics, new ideas, creative tools, marketing and networking. Perhaps I could learn more about nanotechnology and then present the basic principles on my blog,
http://ideenwerk.blogspot.com ?
Could you send me some information on the subject, or just suggest some relevant sources?
In any case, I would love to receive your views and comments...

Thank You,
Jannis

Rik

All right, the 'peggy noonan'-mode wasn't the smartest thing to say. I didn't mean you/we shouldn't be discussing this, just that 'we' should avoid gloom and doom. Yes, bad things may be coming, but then: aren't bad things always coming? Sorry about that, but it seems so self-fulfilling and fatalistic.

Anyway, I've thought some more about limits. Is it possible to turn the development around? I mean, can you have, say, body augmentation first and MNT or MM later? I don't mean body augmentation in the sense of increasing intelligence (if that's possible) or brain modules, but what's now cosmetic or done by plastic surgery. This isn't as disruptive as MM. It will 'merely' offend people's sensibilities and freak out authorities. I believe there should be elbow room to play and experiment. And the more people do something to themselves, the sooner the msm will pick it up. (So you need a product asap that's not in the pants category.)

As soon as people realize what the potential is (try thinking of North Korean mind-children), there'll be a backlash at first and sensible talk later. I hope.

Michael Anissimov

The only imaginable advance with greater consequences than MNT will be the creation of superintelligence. MNT will enable extreme power; this is by the nature of the advance, and there is nothing we can do to change it. Superintelligence will enable even greater power; this is by the nature of the advance, and there is nothing we can do to change it. Both advances are extremely risky, yet unavoidable. If we successfully introduce MNT without a disaster, the risk of creating superintelligence still remains. If we successfully create superintelligence "on our side", we've done our absolute best to manage MNT risk.

"Designing limits into the AI" is not an all or nothing thing. You work on a theory until it can be used to create desirable limits. Ultimately, this is the most important thing to be working on. Any solution involving solely human-level intelligence is a stopgap measure at best. Creating benevolent superintelligence is a panacea because it can discover and implement all the solutions you were too dumb to think of in advance.

jim moore

Limits on military action?
If you assume that one group of people attacks another group because the first group believes that they will benefit from the attack, the question becomes: how can we make universal the knowledge that you can't gain from a military attack on another? With nano-factories giving you (and your opponent) rapid prototyping and instant scale-up in production, your ability to know how your opponent could strike back is greatly reduced. A fundamental limit might be your inability to know what your opponent is capable of doing.

I am optimistic that we can create a situation in which everyone knows there is nothing to be gained from a military attack and there is much that can be lost.

Dan S

A few notes on restrictions and limits…

First of all, the word "restrictions" should not be used: it is misleading. It has negative connotations; "restrictions" assume limits to someone's freedom and creativity. Thus there will always be people fighting against any kind of restrictions, trying to expand the limits. I think there should be no limits.

However…

Instead of the word "restrictions" I will use the word "rules", because:

1. The purpose of rules is not to prevent someone from doing something but to provide a basis for the system's function and development. No system can exist without rules. (Consider a language, for example: there are "grammar rules", not "grammar restrictions" or "grammar limits".)
2. Rules have no absolute value. Any self-consistent set of rules makes sense on its own. Anyone has the option to create a new set of rules (if you don't like a language's grammar you can choose another language or invent your own).
3. Two or more sets of rules must not overlap: no one can apply rules from one system to another (no one can directly apply Chinese rules to English, and so on). To do this one must create a new system with a new set of rules.

The basic freedom is the freedom to choose an arbitrary set of rules. (This statement implies the existence of a few metarules. However, metarules would not impose additional restrictions: a metarule does not limit what can be done, it only defines how it should be done.) Today's world already provides a limited ability to do that; in a post-MNT world this could be fully implemented.

One can ask how to implement this for real. There are a number of options. I will not consider them here, so that this post does not get too large. If we agree on basic principles, we can start thinking about the design of an implementation.

Chris Phoenix, CRN

Miron, I think the courses of research you suggest are good for a different reason as well. Many people would regard uploading as a fate worse than death, so I'm not sure that's a workable response to nano risks. But if we know a lot about brains, human-based organizations, and defensive technologies, we may be able to design a stable and predictable system of governance that is sufficiently reassuring to change the first-strike calculations.

Rik, I wasn't offended by the noonan comment, I just didn't understand it. As to whether we can get through a backlash to sensible discussion: I'm still waiting for a sensible discussion on cloning.

Michael Anissimov: I disagree with the word "benevolent" when applied to superintelligence. Anthropomorphizing any kind of AI (except possibly a neural brain sim) will be a mistake. A sufficiently powerful but unwise AI that intended to be benevolent might kill us accidentally. As I understand SI's argument, the danger comes not from the "intentions" of the AI, but simply from its use of a goal system and lack of limits.

Jim, I agree that uncertainty about what the opponent has could be a stabilizing factor. If surveillance beats bug-sweeping, then that uncertainty could be reduced. Or, an ongoing campaign of probes could force an opponent to either show their hand or be nibbled to death, while still allowing plausible deniability of intent to start a war.

Hm... If I had a weapon-designing computer, and it was creative and skilled enough to design weapons that would work without testing, and it told me what it thought it could do but not how, then I could tell my enemies, "If you attack me, my computer says it'll be able to do $50 trillion of damage to you using overseas nanofactories under its control. It implies that some of the weapons are relativistic." If there was no way to verify any of this *except for the competence of the computer program* then that might deter attacks. However... that gives computers the ability to do $50 trillion of damage whenever their programming is triggered. Not good at all!

So, I'm not optimistic that we can create a verifiably stable situation.

Dan, the problem with rules is that people will frequently be tempted to break them for short-term negative-sum personal gain. If people were willing to follow rules simply because the rules were acknowledged to be good, then Communism could have been an economic success. I'm also thinking about William Calvin's description of how *half a day* of a police strike was enough to allow a large breakdown in law and order. Not to mention the recent rash of corporate scandals.

Rules are great, but without some force to back them up, they just don't work for humans.

Chris

Phillip Huggan

Michael, MNT can be used by humans to end renegade UFAI research efforts. One of the first actions of a "friendly" AI will likely be to develop MNT and use it to halt UFAI research efforts along with MNT research efforts. Humans can achieve this too, albeit with reduced efficiency and likelihood of success. Under the shaky assumption that MNT will be used to limit immediate threats, AGI research in such a post-MNT world could proceed more slowly and securely (with few infrastructure limitations and without a lack of personnel), in a way designed to maximize the odds it will be FAI and not UFAI.

Rik

Chris,

Darn. You sure are making this hard. Rules, limits, restrictions... all very nice, interesting proposals, but nothing seems to work, except for an AI. I'm not sure if I believe in the feasibility of an AI; looking at the speed of development in 'printing' stuff (organs, bones, meat, OLEDs... what next?), it's more likely that desktop factories come first. Meaning you'll have a Napster period on your hands. Which makes the first CRN scenario suddenly a whole lot more probable.
You could, of course, do something really unethical and make someone into an AI. Why not, after all, work with the original instead of reinventing the wheel? I would be a benevolent dictator (or so I think), but I'm perfectly disqualified. You'd need a robo-loving, non-partisan, talented administrator. They're hard to come by.
As for the cloning debate, I think you're looking in the wrong direction. South Korea is the world leader in cloning research, yet it has a growing Christian community, which is not concerned with problems around cloning. That may change. But on the other hand, whatever comes out of SK may force the US and Europe into a more, uh... normal stance.

Last but not least, I still say you should start with a product not for utilitarian, but for recreational purposes. Why not let people play and fulfil wishes? Like Iain Banks says in his essay about The Culture (from his books), wish fulfilment is one of the highest functions of society and one of the most powerful motivators. Seems to me that people wish for less and less. That I find sad, since I have lots to wish for (you probably guessed)...

The comments to this entry are closed.