Wow! This post is particularly relevant to my life right now. On January 5th I start bootcamp, my first day in the military.
MMO of the future, lol (some swearing)
And just so I'm not completely off topic, I agree with the original post. There should be games; they should be fun and challenging and require effort and so on. AIs definitely should not do everything for us. A friendly future is a nice place to live in, not a place where an AI does the living for us so we might as well just curl up in a fetal position and die.
@ ac: I agree with everything you said except the part about farming a scripted boss for phat lewt in the future. One would think that in the future they could code something more engaging. Have you seen LOTR...
Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it's a "waste of time"?
What about a kind of market system of states? The purpose of the states would be to provide a habitat matching each citizen's values and lifestyle.
-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted based on the state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a state at any time and may not be held in a state against his or her will.
-No killing or significant non-consensual physical harm permitted; at most a state could permanently exile a citizen.
-There are some exceptions, such as the decision-making power of children and the mentally ill.
Anyway, this is a rough idea of what I would do with unlimited power. I would build this, unless I came across a better idea. In my vision, citizens will tend to move into states they prefer and avoid states they dislike. Over time good states will grow and bad states will shrink or collapse. However, states could also specialize; for example, you could have a small state with rules and a lifestyle just right for a small dedicated population. I think this is an elegant way of not imposing a monolithic "this is how you should live" vision on every person in the world, yet the system will still kill bad states and favor good states, whatever those attractors are.
P.S. In this vision I assume the Earth is "controlled" (meta-rules only) by a singleton super-AI with nanotech, so we don't have to worry about things like crime (force fields), violence (more force fields), or basic necessities such as food.
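The dynamic described above (citizens join the state that best matches their preferences, and territory is then allocated in proportion to population) can be sketched in a few lines. This is a toy illustration only; the state names, citizens, and the idea of summarizing a "lifestyle" as a single number are all my own assumptions, not part of the original proposal.

```python
# Toy sketch of the "market of states" dynamic. All names and numbers
# are hypothetical: each state is summarized by one lifestyle value,
# each citizen by the lifestyle they prefer.

def choose_state(preference, states):
    """Return the state whose lifestyle value is closest to the citizen's preference."""
    return min(states, key=lambda name: abs(states[name] - preference))

def allocate_territory(populations, total_territory):
    """Split territory among states in proportion to their populations."""
    total = sum(populations.values())
    return {name: total_territory * pop / total
            for name, pop in populations.items()}

states = {"Hedonia": 0.9, "Ascetica": 0.1}            # hypothetical states
citizens = {"alice": 0.8, "bob": 0.2, "carol": 0.95}  # lifestyle preferences

# Each citizen freely picks a state (meta-rule: no one is held against their will).
populations = {name: 0 for name in states}
for preference in citizens.values():
    populations[choose_state(preference, states)] += 1

print(populations)                            # {'Hedonia': 2, 'Ascetica': 1}
print(allocate_territory(populations, 90.0))  # {'Hedonia': 60.0, 'Ascetica': 30.0}
```

Popular states end up with more people and therefore more territory, while states nobody chooses shrink toward zero, which is the selection pressure the proposal relies on.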
Um... since we're on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow, then I suspect the discussion would be much more productive.
"...what are some other tricks to use?" --Eliezer Yudkowsky
"The best way to predict the future is to invent it." --Alan Kay
It's unlikely that a reliable model of the future could be made, since getting a single detail wrong could throw everything off. It's far more productive to pick a possible future and implement it.
Eliezer, what are you going to do next?
"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."
I used to think that, but now I realize that Eliezer is a writer and a theorist, not necessarily a hacker, so I don't expect him to be good at writing code. (I'm not trying to diss Eliezer here; I'm just reasoning from the available evidence and from the fact that becoming a good hacker requires a lot of practice.) Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything; surely some of you talented hackers out there could give it a shot.
Slight correction. I said: "Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, it's an attempt to reverse stupidity to get intelligence." I worded that sentence badly. I meant that a stupid person saying something cannot make it false, and that usually when people commit this fallacy it's because they are trying to claim that the opposite of the "bad" point is true. That is why I called it an attempt to reverse stupidity to get intelligence.
Basically, when we see "a stupid person said this" advanced as proof that something is false, we can expect a reverse-stupidity-to-get-intelligence fallacy right after.