Prerequisite reading which you will probably want open in another tab for reference: 31 Laws of Fun

Unprefaced, this post might sound a lot like I'm just picking on Eliezer, or Eliezer's particular set of "laws".  I'm sort of doing that, but only as a template for ways to pick on Laws of Fun in general.  The correct response to this post is not "Here is my new, different list of N things that will satisfy everyone".

(Well, it would be if you could do that.  I'm skeptical.)

If I purported to come up with general laws of fun, I might or might not do a better job.  Probably I'd do a better job coming up with a framework for myself; I might also be more cautious about assuming human homogeneity, but I doubt I'd do an unassailable job.  And an unassailable job is probably necessary, if everyone will abide by Laws of Fun forever.  An unassailable job of Legislating Fun is needed to make sure that some people aren't caught between unwanted mental tampering and, probably not Hell, but a world that is subtly (or glaringly) wrong, wrong, wrong.

Please do not assume that I outright endorse unmentioned laws; these are just the ones I can pick at most obviously.

I fully expect to be told that I have misunderstood at least half of these items.

6 sits uncomfortably.  The savannah is where we were designed to survive, but evolution is miserly; it is not where we were designed to thrive gloriously.  (Any species designed to thrive gloriously there which was actually put there would find its descendants getting away with more and more corner-cutting until they found a more efficient frontier.  Creatures that can fly don't keep flight just because flying is awesome; they must also need it.)  I want a home designed for me to thrive gloriously in, not one that takes its cues from the environment my ancestors eked out a living in.  I suspect this is more like a temperate-clime park than a baking savannah, and it might be more like an architecturally excellent house than either.  "Windowless office" is not the fair comparison.  That is not how we design places to put people we like.

8 sounds just wrong, or like a misstatement.  Why should we get better and better?  Objecting to flat awesomeness sounds like a matter of denying that it is flat.  Perhaps it just sounds tedious for things to be the same level of awesomeness forever, perhaps it sounds inevitable that the hedonic treadmill will pull downward, but that tedium or treadmill means the awesomeness is not really flat.  If it's actually awesome, by all means let's take a flying leap there and carry on forever.  (Certainly things should not get worse over time.)


10 sounds like it's missing an option.  What can people do for each other?  In our world, buttons do stuff for us, because people with whom we are interdependent made them do that.  In the ancestral environment, people ate food that others hunted and gathered, they listened to music others played, they carried stuff in baskets others wove, etc.  Gifts are good.  Specialization is not evil.  Certainly humans should do things, but what if I want to play the flute now and only have flute-whittling down as my activity for century seventeen?

12 sounds okay provided it is not interpreted to forbid really great video games, roleplaying scenarios, fiction in general, etc.

13 clashes with a lot of the other laws.  (What if I don't want to live my life according to things like, oh, Law 9 [AAAAAH]?  Is someone going to stop me from planning myself a predictable future?)  People might work best under different rules, too.

15 isn't even a law, it's a problem statement.  Same with 16.

17 sounds like artificial difficulty.  Just because there is a challenge there and I don't have the road to get around it doesn't stop me from acknowledging that someone else does.  If there is an AI around, and it could get me out of this jam, a psychologically-untampered-with Alicorn will resent that it is making me do stuff if I don't happen to want to do it, the same way I resented busywork in school that didn't happen to interest me.  Eventually I plan to try making my own butter.  In the meantime, I'm glad that's not a step in making scrambled eggs.

20 sounds idiosyncratic or case-by-case.  (I take it to be intended as a stronger statement than the mere "there are situations where you can tell people true things and it doesn't help them" - even if that's almost false there has to be some perverse scenario to make it so.)  Sometimes - often - I just want people to tell me stuff.  See also 17; the fact that someone else knows, even if nothing I can do will get them to tell me, makes my not-knowing something of a fake problem.

23 sounds like an outright contradiction of 22.  What do you want to do - solve problems by messing with the environment, or "nudge" people to solve a "statistical sex problem"?  Are there not any environmental changes that would accomplish this?  Condoms made a dent, didn't they?  What would literally perfect birth control and disease protection do?  More energy, better health, more time?  Literally perfect privacy?  Better information for everyone about how to be good in bed?  Non-twisted culture to raise new minds in?  Optional perfect body/avatar modification for everyone?  That took me three minutes to come up with.  Leaping straight to "nudges" is... discomfiting!*

27 and 28 are useful tools for fiction and interesting thought exercises, but it is not how we build houses (mine is right-side-up and has its plumbing in its bathrooms rather than on the roof, thank you).  It may not be how we should build eutopias in which we hope to actually live.  I like comfort.  I expect culture to change radically - the way cultures do when time passes and/or things change - once everyone has settled into transhumanity, but the form that transhumanity takes needn't itself be frightening, and trying to design this in sounds like a bad idea.

The general undercurrent through the laws - the "people should not have [access to] X" thing - sounds problematic.  Unless we plan to deceive them about the nature of the postsingularity universe, their not being allowed X will be known to be somebody's fault.  Someone believed in some Laws of Fun that they programmed into an AI that determined that the best way to optimize for that kind of Fun was to disallow X.  Somebody is going to want X and somebody will be disappointed.

It is one thing to have a Eutopia that is scary, but it sounds so terribly sad to have one that is disappointing.

And I have never yet sincerely overestimated human heterogeneity.


*I think Eliezer in general assigns more-than-average importance to gender as a factor in personality, anyway... often in a heteronormative way.


156 comments

To me, this post seems perhaps better-suited for the Discussion section. It's too long to be a comment but doesn't really strike me as necessarily meriting an independent mainpage post.

The post she is replying to is a Main post (featured, no less). Do you think it should be converted into a discussion post? I just read both posts and I feel as though they did an equal amount to enlighten me about the nature of Fun. I don't see any good reason to privilege the visibility of one over the other.
Caveat: I am completely indifferent as to where this post goes. Eliezer's OvercomingBias contributions were taken as the seed posts for LessWrong years before 'Discussion' was in place. Taking one of the original posts and putting it in Discussion would just be disruptive to the timeline. Leaving the OB posts be isn't a moral judgement that Eliezer's off-the-cuff declarations of what is fun are better than Alicorn's. Heck, the style and tone are completely different. Alicorn really is just discussing Eliezer's post and writing things like "Aaaargh".
I don't feel that the original post should be converted into a Discussion section post; I believe it should be privileged because it stands alone, while this post does not. In general, I consider posting lengthy replies to Main section posts as their own Main section posts quite undesirable. Consider what LessWrong might look like if this became a common practice-- original Main section posts would be outnumbered by lengthy replies to them, making it difficult for new and interesting conversations to start.

SarahC: I do like the idea of living by your own strength somewhat though... a sense of helplessness and lack of skill is incredibly demoralizing.

Alicorn: I don't want to stop people from doing that; I just don't really find it appealing and object to enforcing it

SarahC: actually, I often think that for me and a large number of people I know, the biggest negative factor in our lives is awareness that we're not good at much.

Alicorn: I'm good at some stuff. To do that stuff, I like to use tools. I don't want to eat bugs for fifty years in a fake savannah while I work out how to fire ceramics so I can cook.

SarahC: oh. yeah. that's undeniable.

Alicorn: Alicorn-who-has-to-live-by-her-own-strength has to get PHENOMENALLY bored before she ever picks up a flute again. She sings, when she wants to make music, because she hasn't come around to finding whittling interesting yet. And this is sad for Alicorn-who-has-to-live-by-her-own-strength.

SarahC: agreed.

I believe that one is supposed to complete a singing quest so as to gain access to flutes.

This was probably a joke, but: This strikes me as not much better. It's still "I have to do something largely unrelated before I can do what I actually wanted to do." Imagine you had a flute-playing competition, but only those who were also good singers could enter! The best specialized flute-players might not be able to enter at all.

Then you don't have to sing. But you do have to do something else. Or if all you want to do is play the flute, then you get a flute at the beginning and all your other rewards come from flute-playing. The rules of the universe can be adapted to individuals without being chosen by individuals.
I adore the phrase "gain access to flutes" and I have no idea why.
This one time in a band camp...
I think it's about time I actually watched that movie.
Alternatively, visit the ancient order of music monks who, after completing a series of arbitrarily difficult challenges, will reward you with a flute.
What if the monks fail to complete the series of arbitrarily difficult challenges?
This mostly sounds like loss aversion versus current abilities. The proposed temporary states don't seem objectively that bad! Would it be so bad for Alicorn-by-her-own-strength to have to cook as she does now rather than eat at nine-star restaurants, or to play music on a flute rather than be given access to future, superior instruments?
I can see what you mean by 'objectively bad', but don't see why anyone should care about the concept. Also, I think that you have expectations that are too low for a utopia -- both of your sentences were pleas that such a state of being would be acceptable, but I don't think that's what we should be going for here. Compromises can be made later if and only if they must.
I'm not that interested in how "objectively bad" eating bugs for fifty years is. (Let alone spending centuries domesticating my own bananas and setting up my own deep sea fishing expeditions.) I don't think they're reasonable prerequisites. I cannot decipher your last sentence; please rephrase.
Instead of comparing eating bugs to eating modern food, compare eating modern food to eating futuristic super-perfect food. The difference is roughly comparable but the latter may be more emotionally accurate.
More emotionally accurate how? Is the idea that we get to keep our existing tools and only have to make new ones ourselves? Or that I shouldn't feel as grossed out by eating bugs as I do so here's something not-gross to think about?
More the former, at least for those of us from Old Earth for whom losing our standard of living would be traumatic, but also the latter in that eating bugs wouldn't be gross if you were used to it and also eating actual genetically-uncharted plants that were grown in actual biological dirt might look just as disgusting from a far-future perspective.

I doubt I'd do an unassailable job. And an unassailable job is probably necessary, if everyone will abide by Laws of Fun forever.


Unless we plan to deceive them about the nature of the postsingularity universe, their not being allowed X will be known to be somebody's fault. Someone believed in some Laws of Fun that they programmed into an AI that determined that the best way to optimize for that kind of Fun was to disallow X.

One point of that sequence is that typical suggestions for Utopias are abominably flawed, that there are certain things that could be done much better, and that people don't spontaneously notice these flaws upon hearing a description of a Utopia. The sequence draws attention to certain problems of human judgment, and trains you to notice such problems whenever you see another proposed "Utopia", makes you less gullible.

It is emphatically NOT a suggested set of rules for an AGI to enforce, indeed one of the arguments that could be drawn from that sequence is that trying to manually construct such rules is a bad idea, that lists of issues such as that sequence or one given in this post will inevitably follow from any simplistic ruleset. You don't program rules into an AI, you program a way of figuring out what the rules should be (and those rules have to get down to the level of saying how to arrange atoms, so won't have much to do with verbal descriptions of human condition).

This isn't a very strong article. It belongs in Discussion.

Both this post and the one linked seem to be both about fictional utopias for literature, and actual optimal future utopias. These are completely unrelated issues the same way good fictional international conflict resolution is WW3, and good real world international conflict resolution is months of WTO negotiations over details of some boring legal document between 120+ countries.

Care to provide more than an argument by analogy to support that? What are specific mistakes you suspect fictional utopia designers would make? By the way, if this is really true then instead of designing Fun Theory, we should work on Eutopia Interim Protocol, which could be something like everyone getting split into lots of little parallel universes to see what seems to be the most fun empirically. Or things change slowly and only by majority vote. Or something like that.
Total number of hours per lifetime people in every literary utopia ever printed spend watching videos of kittens doing cute things: 0. Total number of hours per lifetime people in any real utopia would want to spend watching videos of kittens doing cute things: 100s or more. Anecdotal evidence: have you seen the Internet? More seriously, the Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice the total lack of interest in realistic violence and gore and anything more than one standard deviation outside the sexual norms of the society, and none of this due to lack of availability. When people are given a choice of just about anything (to watch or read, that is), they prefer to watch cute things, and funny things, and stories about real and fictional individuals, and factual information about the world (Wikipedia), and to connect with people they know, etc. This is all so ridiculously mundane that no self-respecting utopia writer would ever get near these things. (And by the historical standard of what humans lived like for 99% of their existence, modern society counts as a Utopia already.)

More seriously, the Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice the total lack of interest in realistic violence and gore and anything more than one standard deviation outside the sexual norms of the society, and none of this due to lack of availability.

Eh? Total lack of interest? Have you ever been on 4chan? Realistic violence threads crop up regularly over there, and it's notorious for catering to almost any kind of sexual deviance the average person can think of. (Out of curiosity: what would you consider "more than one standard deviation" outside the sexual norms of the society? How about two?) I say almost, because 4chan is regulated and it isn't the go-to place for quite everything; child pornography nets its posters permabans pretty quickly and it doesn't have the dedicated guro boards of its Japanese counterpart. Which is to say nothing of blood sports like traditional bullfighting or cockfights, for which even a quick search on YouTube can offer some clips (relatively mellow and barely containing any actual blood as they are).

Stuff like that may not match the t...

I actually know various chans quite well, and they all pretend to be those totally ridiculous everything-goes places, but when you actually look at them, >90% of threads are perfectly reasonable discussions of perfectly ordinary subjects. Especially outside /b/. This generated far more interest on 4chan than all the gore threads put together.
That's still not the same thing as a "total lack of interest."
Would you expect that people who use the Internet more also tend to be happier?
That's a difficult question to answer, since amount of Internet use correlates with age, wealth, education level, location, language used, employment status, and a lot of other things which might have a very big impact on people's happiness. I could give the cached answer that "if it didn't make them happier they wouldn't be using the Internet", but there are obvious problems with this line of reasoning.
Especially since the cached answer fails to explain addiction, which is quite possible with the Internet.

I think the criticism of 6 is a misunderstanding. It doesn't say "the world resembles the ancestral savanna", it says "the world resembles the ancestral savanna more than say a windowless office". The best environment is unlikely to be anything like the ancestral savanna, but it's likely to be closer to that than to a windowless office, in terms of sensory experience. The point I think is not the specifics of the environment, but that it engages with our bodies and senses in a way that we, as evolved creatures, find satisfying, and in a way that the purely mental stimulation available in the office does not.

That's what I took away from the linked post.

I'm really not sure I'd even rather live in something that was just "more like" a savannah than a windowless office. Offices usually have stuff in them, and... air conditioning.
Making bland air that doesn't taste of anything, except perhaps paper dust. Nobody's saying you can't temperature-control the place; if you don't like the literal savannah, how about a nice temperate forest on a summer's day in European latitudes? With enough green stuff to give the air freshness, and the occasional animal? I suggest that you are not thinking specifically enough.
Hmm, there seems to be a general communication problem here. Saying "more like X than Y" will only succeed at communicating, if others share your intuitions about the most salient differences between X and Y.
I doubt that air temperature was what EY was alluding to. The “From Cro-Magnons to Consumers” parable (p. 2) and “The Natural-Living Test” (p. 331) in Spent by Geoffrey Miller make the point I think EY was trying to make much more clearly. (And even wrt air temperature, don't fall for the typical mind fallacy. I mean, all those people going on summer holiday in Florida can't be all masochists, can they? My mother's favourite temperature is at least 5 °C higher than mine.)

This should be moved to Discussion I think.

In general, the post was about trying to design utopia. Both the real world, and most proposed utopias, fail miserably both at all of these things and at your rejoinders.

Eliezer's world has small happy surprises. The current one has "Surprise! Your friends are all dead." While the "Laws of Fun" might need to be replaced entirely, something like Fun Theory really ought to exist.

IIRC, the post was about designing fictional utopias, and a quick re-scan seems to confirm that. OTOH, the line between fiction and reality is somewhat blurred throughout the Fun Theory sequence, so perhaps I'm remembering incorrectly, or missed the point in the first place. In any case, I consider the difference important. Actual utopias, unlike fictional ones, don't have to seem compelling or even coherent to outsiders. That creates a whole different set of constraints. If I interpret Eliezer's post as the template for an actual future, I agree with you that it beats having everyone I care about suffer and die (I felt the same way about his "failed utopia"), but I also share many of Alicorn's objections.
As I understand it, the series (and that post) was about real utopias, with some notes about fictional utopias for reference, comparison, and writing tips.
Fair enough. WRT real utopias, I expect I would find the prospect of living in one enormously disquieting, should the offer be made, but that I'd be both happier and better off living in one (should I choose to, or be denied a choice) than I am now.

I've never experienced that disquiet, and it worries me. I read Brave New World and while the unnecessary torment of the Gammas and Deltas bothered me, my only thought about the lifestyle of the doped-up, lavishly entertained, socially and physically secure Alphas and Betas was "Awesome! Can't wait." Same thing with the Experience Machine -- if you really could lock everyone who wanted one into their own personal optimal-experience simulator, where the only catch was that it wasn't "real," and we stipulate that the machine works as advertised, I'd sign up without much hesitation.

If somebody in real life offered that kind of opportunity, I'd turn it down, because we can't really be sure that it would work as advertised...but if we really think that life in one of these utopias would be an improvement over the status quo, what's to find disquieting?

Sure, I can understand that, and I can understand being worried by it. Me, I frequently react more or less this way to the prospect of dying, and that worries me sometimes too. Not sure what to say about it besides "yeah, brains are complicated."
Can I ask you a hypothetical? Say you're Abe Lincoln, and you're planning to free the slaves (or insert your favorite historical example of some profoundly good action). Now, suppose you have good reason to think this will be very difficult and involve a costly war. But someone has recently built an experience machine, and you're sure that within the experience machine you could free the slaves without any trouble: an easy and bloodless emancipation would make you happier, and that's what the experience machine will get you. So on the principle that you should take the most efficient means to your end, should you just go into the experience machine for life, and declare the (virtual) slaves free? Or is this not actually a means to your end?
Presumably that's where the "everyone who wants one" caveat comes in. Most people aren't Abe Lincoln and, as far as I can tell, don't really care whether their actions have any significant effect on people outside their social circle. They probably wouldn't care too much about that social circle being real vs. simulated, either - as much as I like the individuals in my current social circle, if I was starting from zero rather than replacing them, I wouldn't mind ending up with a fully simulated social circle so long as it was similarly engaging / persistent / etc.
Well, how would you answer my hypothetical? And suppose I rephrased it thus: your friend needs help say, getting through a painful divorce, and you knew that this will be a difficult process taking many years. But you also know that if you put yourself in an experience machine for the rest of your life, you could soothe your (virtual) friend's wounded soul in half an hour. Supposing the move to the experience machine doesn't interfere with any of your other plans (they could be simulated too, of course), would you consider the experience machine simply a more efficient means to your end? Or would it fail to achieve your end at all?
I would consider my goal not accomplished. It seems to be one of the basic tenets of my value system that other people exist and one other person is just as valuable as me; therefore one of my responsibilities in life is to help other people. I am very, very leery of futures where I would end up in a virtual world–if they're simulated deeply enough to be sentient beings, I would care about them as well, but I would be abandoning any chances of influencing the fate of everyone else. I think maybe I could justify it to myself if I knew in advance, and could prove to myself, that everyone else was also ending up in a virtual world where they would be even happier... But the idea still makes me uncomfortable. I think it seems like 'cheating at life', somehow, taking the easy road out. Although that's probably a random emotional prejudice more than a logical objection.
I think this is exactly right. While I share your lack of understanding of exactly why the machine would be unsatisfying here, I don't think it's a random emotional prejudice. I think we're on to something.
That is sort of an illogical question. What it boils down to is "Is your goal to feel like you are helping somebody, or is your goal to help somebody whom you are actually emotionally attached to?" If I agree with the former, then aside from realizing I have some pretty vapid and pointless goals, I'd get in the machine. But if I had the clarity to realize my part in that goal system, helping others probably wouldn't be high on my list of things to do. I would get in without a glance backward, and start thinking up something interesting. If I genuinely believed that I cared about them and got in the box anyway, do I really qualify as a sentient being? The machine might as well be a meat-grinder in that case. If I genuinely cared about the person in question, I would realize that with me inside the machine, he would still be suffering, and my social programming would not easily allow me to deviate from the "right thing to do", and I would refuse to get in.
If my actual friend is actually hurting, my goal is to actually fix that; a simulation of the individual isn't relevant to that. But I don't care very much about people who aren't my friends, in most cases, so if it's a choice of becoming friends with real person Alex, who might get hurt and need support that I can't give, or simulated person Abe, who won't present me with any problems that I can't actually solve, I might well choose Abe, so long as Abe is as interesting as Alex in every other way.
My point was just that we would resist the experience machine if we took ourselves to have ethical obligations or a chance to do something good in the world we live in. 'Real' isn't quite the issue here, since if you started in the experience machine you might justifiably want to stay there instead of moving to another one or into the real world. In other words, over and above our experience of the world, the world we've been living in has a basic ethical importance for us. We wouldn't give it up for just anything.
With my friend's permission, I'd rent a two-seater experience machine for a month or so, sit myself down in the administrator position, and use it to play through various scenarios calculated to be useful for my friend's psychological well-being. What the hell kind of a utopia only has a holodeck with a one-way door?
Don't fight the hypothetical!
Fighting the hypothetical is a legitimate tactic when there's a contradiction in the hypothetical premises. In this case, we're assuming a world where people have learned to create perfectly immersive virtual environments, but somehow forgotten how to charge money for valuable services or build a power switch that works on a timer, which seems contradictory based on what I know about technological development.
That's irrelevant to the question (hence, a case of fighting the hypothetical). Mass Driver said he would enter into an experience machine permanently (that's how I took the word 'lock') without much hesitation, if the machine-world were better but unreal. The purpose of my hypothetical was to show that while 'real' isn't quite the issue, there is something about one's own world that we're ethically attached to. And we're attached in such a way that an experientially identical world which is different only in being not our original world, is for that reason significantly different.

My modification to Reedspacer's Lower Bound: your utopia should be at least as good as the one I could give myself with a Star Trek holodeck.

27 and 28 are useful tools for fiction and interesting thought exercises, but it is not how we build houses (mine is right-side-up and has its plumbing in its bathrooms rather than on the roof, thank you). It may not be how we should build eutopias in which we hope to actually live. I like comfort. I expect culture to change radically - the way cultures do when time passes and/or things change - once everyone has settled into transhumanity, but the form that transhumanity takes needn't itself be frightening, and trying to design this in sounds like a bad idea.
Oddly enough, there are some people who aren't particularly interested in learning or becoming stronger. Would your preferred radical change sit well with them?
I wasn't claiming that the future should be scary and different because of my personal preference...any more than Alicorn was claiming that her personal preference for no surprises should determine the future for everybody. I was really just pointing out that although I agree with Alicorn on a lot of things, this is a particular area where we are different, probably more because of personality than values. I guess this was more my point: there's a wide variance in terms of human preferences for novelty vs familiarity, and I'm far from the novelty extreme. Any future that doesn't take that into account is going to make someone unhappy–well, either overwhelmed and terrified or really bored, depending on which side of the continuum that future ends up on. But given that even in today's world, it's possible for at least some people to choose the level of risk/novelty/scariness they want in life, hopefully it shouldn't be too hard to tailor a eutopia in that sense, either.


Sorry, Alicorn, but you really are something of an anomaly here...

Thing is, people who are anomalous in some direction aren't particularly rare. Alicorn's specific anomaly might not warrant a rule-rewrite on its own, but the meta-point that all of these rules might be able to evoke 'AAAAAAAH' from some subset of the population does seem like an important one.

And even if any deviations were genuinely rare, the system should be flexible enough to accommodate the unusual people.
I agree, though I would consider a system that prohibits certain specific deviations from existing in the first place (chronic, non-situational depression, for example) to at least not be automatically disqualified as a utopia.
Good point.

Perhaps, but she's far from alone. I'm mostly with her on this one; letting people live in ignorance we can cure just so they can appreciate knowledge more when it's eventually obtained makes about as much sense to me as letting them suffer from illness we can cure just so they can appreciate health.

It seems to me that Eliezer's post was a list of things that typically seem, in the real world, to be components of people's happiness, but are commonly missed out when people propose putative (fictional or futuristic) utopias. It seemed to me that Eliezer was saying "If you propose a utopia without any challenge, humans will not find it satisfying" not "It's possible to artificially provide challenge in a utopia".
Sure, at that level of abstraction, we're all in agreement: challenge is better than the absence of challenge. The question is whether this particular form of challenge is better than the absence of this particular form of challenge. Just to make the difference between those two levels of abstraction clear: were I to argue, from the general claim that challenge is good, that creating a world where people experience suffering and death so that we can all have the challenge of defeating suffering and death is therefore good, I would fully expect that the vast majority of LW would immediately reject that argument. They would point out, rightly, that just because a general category is good, does not mean that every instance of that category is good, and they would, rightly, refocus the conversation on the pros and cons, not of challenge in general, but of suffering and death in particular. Similarly, the discussion in this comment thread is not about the pros and cons of challenge in general, but of ignorance in particular.
I agree with you (and Alicorn), but "AAAAAAAAAAAAAAAAAAAAAAAH" doesn't make for a very strong argument.
In this particular context, "that sounds like something I wouldn't enjoy at all" is a reasonable argument, since the whole point is to set up a world that's optimally enjoyable. "AAAAAAAAAAAAH" is just the extreme form of that argument.
Yeah, I get that, but I was under the impression that Alicorn was saying not merely, "I personally wouldn't enjoy that at all, YMMV", but "I wouldn't enjoy that at all and neither would most other people". I could've been reading too much into her statement, though.
I'm pretty sure the more accurate actual-words form of the argument is more like "that would be torture for me" than "I think most people would prefer that not to happen". Sufficient dust specks might be > torture, but unlike dust specks, torture never belongs in a utopia.
Alicorn hates surprises, and I've never known her to assume that this means everyone else, or a lot of other people, must also hate surprises.
Fair enough, I apologize for reading too much into her words.
Taboo “surprise”, perhaps? I wouldn't like to already know all the sensory inputs I'm going to receive in the next month, but maybe Alicorn is interpreting “surprise” according to a more narrow definition. (Though Eliezer does seem to value surprise more than some people here -- e.g. his aversion to non-rot13'd spoilers.)
I also would not like to know all of my sensory inputs in advance. I don't actually believe that condition is coherent. That said, I would also not like to know that an accurate prediction of all my sensory inputs for the next month is sitting in a file somewhere that I am not permitted to see, even though in that case all of those inputs would come as a surprise.

If I understand this post correctly, the main thesis seems to be something along the lines of "different people should live by different rules, with rulesets locally optimized for the person". Fine, but there still has to be some sort of metaruleset by which local ordinances are chosen. And, per 17/18, user customization is not necessarily the right choice here.

Upvoted for content (although the piece could be made more readable if it's going to be on Main). You might also post this as a comment on the "Laws" post, or at least a shorter version with link.

7 is the one I have the biggest problem with. The opposite of happiness is sadness, not boredom. Anyone who says otherwise fails at opposites, and should probably retake the first grade.

Yes, happiness and sadness are both sides of the same coin, as are love and hate, but they are sides that are as far away from each other as it is possible to be on that coin, and anything not on the coin isn't pertinent.

Opposites are not things with as few commonalities as possible. Black has more in common with white than it does with ketchup, but that does not make black a...

(I was scooped by Wedrifid, but here's my phrasing.)

The opposite of happiness is sadness [...] Anyone who says otherwise fails at opposites, and should probably retake the first grade.

I say otherwise. Specifically, it seems to me that the opposite of happiness is a type error; not everything has to have an opposite, and most things don't. The purest examples of what we call opposites have a duality to them, a one-dimensional-ness: it makes sense to speak of hot and cold as opposites because temperature has an order relationship to it; if the temperature is changing, it's either getting hotter or it's getting colder, but not both or neither. Similar remarks could be made about left and right, or the boolean values true and false. But most useful concepts are way too complicated for this kind of duality to apply: what's the opposite of blogging? What's the opposite of graphite? What's the opposite of Vernor Vinge? These questions simply have no sensible answer.

Oftentimes we like to contrast two different things, but this should really be kept conceptually distinct from those things being opposites. I would expect your star first grader, when queried about the opposite of cat, to...

The opposite of cat is clearly not dog, but rather cow.

After all, we don't eat dogs or cats, but we do eat cows.

And you can keep a dog or a cat in an apartment, but you can't keep a cow. Also, cows have horns; dogs and cats don't.

Also, they revered cats in ancient Egypt and they revere cows in modern India; ancient and modern are opposites; and Egypt and India are on opposite sides of the navel of the world.

Wow, I think we've invented a new version of the Rationalization Game.

I once spent a train commute with a former colleague discussing what the opposite of "cow" was. I don't remember what answer we finally agreed upon, but it certainly wasn't "cat". (However, I can't rule out that it was "vodka" or "to be".)
It seems to me that an important part of a consistent definition of "opposite" is that the opposite of the opposite of something is that thing. So, if the opposite of happiness is boredom, then the opposite of boredom must be happiness -- not sadness, not interest, not love. By extension, there can only be one set of equivalent things that is the opposite of any one thing. Poets might use the term in a looser sense, but I don't see how programmers or AI philosophers can and still retain useful meaning.
Can you clarify how any meaning of "opposite" is useful to programmers or AI philosophers?
I suspect it retains some use in some situations, but I also suspect those tend more towards the less-formal context like the original post here. I'm tempted to say I don't see a specific use in highly technical discussions, however I have low confidence in my ability to make statements like that accurately. (And I think that if I thought about individual words without any context, I'd end up scratching so many off the list as to render discussion rather difficult.) In other words: no, I cannot. On noticing this, it seems like we should taboo the word.
I'm not a programmer by any means, but it seems to me that it might indeed be useful if an AI can generalize from following directions where you want to go right at an intersection to reach your destination, so [opposite] that to left on the way back, to situations where you accelerated to a hundred meters per second to reach your traveling speed, so [opposite] that to decelerate to a stop so you don't crash at your destination. Or if you heated a sample up by 50 degrees celsius, [opposite] that by cooling it 50 degrees celsius to reach the original temperature. Or, if it can quantify a person's happiness, then if they lost 50 hedons when their puppy was run over, then [opposite] that to bring them back to their previous satisfaction level.
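One way to make the examples above concrete: if reversible actions are represented as signed changes to a named quantity, then "opposite" is just negation of the delta, which automatically satisfies the opposite-of-the-opposite-is-the-original-thing property proposed upthread. A minimal sketch (all names here are hypothetical illustrations, not from any real AI system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A reversible action: a signed change applied to some quantity."""
    quantity: str   # e.g. "heading_degrees", "speed_mps", "temp_c", "hedons"
    delta: float    # signed magnitude of the change

def opposite(action: Action) -> Action:
    """Return the action that undoes `action`; negation makes this an involution."""
    return Action(action.quantity, -action.delta)

heat = Action("temp_c", +50.0)
cool = opposite(heat)
assert cool == Action("temp_c", -50.0)
assert opposite(opposite(heat)) == heat  # opposite is its own inverse
```

The point of the sketch is that "opposite" is only well-defined here because the representation is one-dimensional (a single signed number), which is exactly the duality condition the earlier comment about hot/cold and true/false identified.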
Not with those words, but I'd assign a probability of more than 5% that a randomly-chosen first-grader would answer something to that effect. (On the other hand, I seem to recall homework questions I was assigned in elementary school which assumed that the antonym of sweet is bitter¹, which it clearly isn't -- water is neither and dark chocolate is both.)

¹ Actually the Italian equivalents thereof, but I can't think of any major difference between their denotations or connotations.
I am willing to concede that there is a meaningful sense in which happiness and sadness each do not have meaningful opposites, although for most purposes I think it's more practical to treat them as opposed. But I deny that there is any sense in which boredom is the most appropriate "opposite" for happiness, or that dismissing an opposite candidate as being "another side of the same coin" is a creditable insight.
No. I am quite capable of telling my first grade teacher the verbal password "sadness" as the opposite to "happiness" but I have no particularly good reason to declare the two fundamentally opposite psychological states, nor any particularly good reason to believe that the conventional wisdom taught by my first grade teacher constitutes sound neurobiological insight.
This doesn't constitute a refutation, but when I'm in a state of boredom, and don't feel that I can achieve happiness right then and there, I will often seek something out to make me sad, or at least angsty. I think this is partly because my brain has a much greater tendency to get "stuck" in boredom than to get "stuck" in sadness... Something about the intensity of feeling sad makes it just more interesting than boredom, and I can pop out of it pretty easily. (I did meet someone who was surprised that I did this, since he found his brain had a greater tendency to get stuck in states of sadness that would become depression.)

On a more general note, whether happiness/sadness or happiness/boredom are opposites depends on how you define the word 'opposite', and I don't think it's necessarily pertinent to deciding which one is the 'greater enemy'. Sadness and boredom are both signals that something is wrong with your current situation and you need to start behaving differently to fix it. Sadness is probably the more urgent signal, in that it tells you something about your situation is actively deteriorating or has been permanently broken, whereas boredom is just a signal that things are okay currently but you should probably be exploring a wider range of options. I would like to be capable of experiencing both, because just as humans incapable of feeling pain are pretty dysfunctional, anyone incapable of either sadness or boredom would likely find all of their decisions affected by it. Shutting off the negative-emotion signals doesn't achieve happiness any more than shutting off pain receptors instantaneously achieves pleasure.

[Meta] Quick quibble about presentation, it would help if you quoted or summarised the laws rather than just the number, I'm flicking between tabs which is disorientating. [Otherwise good article.]

10 is just easily improved. It is good to be able to do for yourself, but it is also good to sometimes do for others and to let others do for you. If you don't fully understand that statement, go to Burning Man -- this is the first and best thing it can teach you.

We have already established that I should not go to Burning Man.
"You" == "reader". I think you already understand the thingy I was describing.

A lot of people value making things, but the type of things I want to make is different from the type of things I want to use. Many people want to make art of various kinds; fewer would value making disposable plastic packaging. There is also the fact that if a non-expert is making something for the fun of it, there is a good chance it just won't work. I might want to play a functional flute despite having made a flute that sounds like a strangled cat. And if you want to build anything high-tech, you can't easily work your way up from raw materials. Many people would like to do some programming without having to build microelectronic components.

People want change, and they want to have an effect on their world.

A utopia that actually satisfies people is going to change, so any particular vision of utopia is of a temporary state.

A lot of people also become very fond of things they've experienced, so a utopia will include retro-utopias.

I suggest that a utopia has to have resilience built in so that it can recover if it starts turning into something its members don't want, and that this is worth speculating on, rather than just listing the good things you want in a utopia.

Suggestions for a resilient utopia?

I always consider this to be The blunder Eliezer has ever made, and beneath that worried that it might not and that any words that'd not be hell to everyone except me would be hell to me. Thanks for releasing that worry a bit.

[This comment is no longer endorsed by its author]
Could you clarify this sentence? Fixing the typos/editing-mess would make it easier to be sure of the content.
I must have been very tired when I wrote that, I don't know either.
Armok means that he disagrees strongly with Eliezer on Fun Theory, and is/was worried that he would not at all enjoy a eutopia produced by human CEV that everyone else enjoyed. I know Armok means this because I talked to him IRL. EDIT: pronouns changed from gender-neutral to "he" because apparently gender-neutral pronouns get on some people's nerves. Apologies to anyone I annoyed.

I've never been convinced that gender neutral pronouns actually fulfill the underlying function of language -- namely, to communicate. Especially with so many competing standards. Personally, I use the singular 'they.' I mean, yes, it's not technically correct, but people understand you.

Especially with so many competing standards. Personally, I use the singular 'they.' I mean, yes, it's not technically correct, but people understand you.

And, after all, 'em eir ey' aren't technically correct either. On account of not being words.

I have mellowed in the last year or so. I no longer downvote every comment that uses that kind of language. They no longer have the same close ties with an abhorrent (local) political agenda so I can now consider them more or less acceptable.

EDIT: Most unexpected significant and rapid downvoting of one of my comments ever. I retract it, including the downvote policy change - I have returned to considering the subject as distasteful politics.

Upvoted wedrifid because the original downvoting was weird and silly. I think the downvoting policy he returned to is similarly silly, but at least had a coherent motivation.
I must admit it is certainly the most whimsical of my downvoting policies.
Ok, my reception-of-comment predictions are way off now. I expected the parent comment to end up more downvoted after the preceding edit, not less (it expressed intention to follow a voting strategy that some may not like for reasons that some would not follow). Instead it went from -5 to +4. Unfortunately the voting strategy mentioned (and the underlying preferences) actually relies on my model of people's motivations when using "ey"s. Since my model of behavior patterns in the context are unreliable my strategy in responding is not clear. I'll have to go by whim and intuition on a case by case basis!
Disclaimer: I'm strongly in favor of gender-neutral language. I do personally use singular "they" because it's the least obtrusive and most correct, but consider ey/eir/em to be a decent alternative (they feel like the most natural Schelling point to me if you were going to pick a new word, and tend to be read as typos or completely glossed over during the interim period where they're still gaining traction, whereas xir/zer/whatever just look weird). In general I consider gender-neutral language better than no gender-neutral language even if obtrusive, and have little sympathy for people who consider it obtrusive. Yes, it's a pet political peeve of mine that makes me okay with this, but it's pet political peeves of the pro-English-status-quo folks that make it a remotely big deal in the first place. Less Wrong is the one place where I'd consider altering this perspective, because we make a genuine effort not to be political at all, and whether I like it or not, it IS a recurring consequence of gender-neutral language that someone makes a big deal out of it. (Case in point, this thread.) I place most of the blame for this on the people complaining about it, but it is what it is. (I am personally annoyed whenever someone uses "he" to describe someone who turns out to be female or gender-nonconforming. I don't have a consistent policy on how to respond to that, but I'd accept blame for arguments that happen because I made a big deal out of it.) But your downvoting seemed incredibly weird, especially without anyone clarifying why they did it in the first place. Pro-gender-neutral folks shouldn't have downvoted you for having changed your policy. Pro-status-quo folks who are upset at you for "having mellowed" would a) strike me as INCREDIBLY ridiculous, and b) really should have explained their motivations if they wanted to punish your defection in a meaningful way. So the downvoting was either dumb or deliberate trolling. Presumably the subsequent upvoting was by other people sym
I am persuaded. "Ey/eir/em" are cool not-quite-words. Analytical tangents are also cool. Your parent was not especially political - in particular, it was only directly in favor of the new words rather than abusing those words to push a different agenda. When divorced of any other connotations, a simple word preference is not especially dramatic. My best guess was that it was people trying to punish me for acknowledging that I formerly had that policy (plus the usual two or three downvotes that I expect most of my comments to get from people I have pissed off recently - perhaps a couple from Clippy). But, as I noted, my model of human behavior in this context was completely broken, so I had little confidence in that prediction. I would have loved hearing that if that was the case.
Neologisms are still words. EDIT: As wedrifid implies, words are strings of characters with socially established meanings. Just because he doesn't belong to the social group that uses those words to mean those things doesn't mean they stop being words. It'd be like saying {klama} isn't a word merely because only around a thousand people or so have ever used it to mean "go/come."
Sure, ok. "Not words in this particular established language". Arglebargle witzot phlerg.

You mis-spelled "flerg".

Must have missed my edit where I explicitly mentioned social groups. Also, see Wittgenstein's comments on private language.
No. Nor is that conclusion suggested by my reply.
What was your reply supposed to suggest?
It depends on what you mean by “technically correct”. It's been in use for at least four centuries.
Every time I see the word "ey", I can't help but think of Fonzie. Eyyyyyyy! Anyway, I prefer using "he" or "she" as the gender-neutral pronoun, despite the fact that neither of these is actually gender-neutral. The singular "they" grates on my nerves like an unclosed bracket. I am probably in the minority on this, though.
Singular "they" has been around for centuries longer than you have; you may as well get used to it.
Lots of ideas have been around for centuries longer than myself, but this doesn't necessarily mean they are good ideas. Just to clarify, I did not mean to imply that the singular "they" is grammatically incorrect; merely that it is, in my personal and completely non-authoritative opinion, bad style.
You could choose to change your mind about that; and studying the actual history of the language might help.
Um, I don't want to change my mind just to change it, I want to change it to make my current beliefs less wrong (tm). I agree that studying the actual history of the language could help, but I'm more concerned with present-day usage as a practical matter, than with the historical perspective regarding the evolution of languages in general, and English in particular.
Whether it's a good idea to utter a particular sentence “as a practical matter” depends on what your listeners will think upon hearing it, which depends very much on what set of linguistic inputs they've been exposed to so far and very little about any purported stone tablets in the sky determining whether something is good style regardless of what any users of language do.
I wasn't talking about any kind of prescriptivist stone tablets, just my own preference. In my experience, which may not be representative, the gender of a person or people I'm talking about matters a lot less than the number of people, most of the time. Thus, sacrificing gender recognition fidelity is a good tradeoff, most of the time.
On the other hand, in English the number of people is usually already encoded in the antecedent of the pronoun, whereas whether the gender matters isn't usually encoded anywhere else.
You're going to need gender-neutral pronouns anyway, since not everyone is a man or a woman. Might as well use the same pronouns for people of unknown gender.
The difficulty of introducing new pronouns into English isn't just political reaction. Linguistically, pronouns are a closed word class in most European languages — unlike in, say, Japanese. Closed classes don't change much, unlike open classes such as nouns and (in English but not Japanese) verbs. That said, there is apparently a strong and somewhat popular movement to adopt a gender-neutral pronoun in Swedish. Closed classes can be changed; it's just rare.
Whether verbs are a closed class in Japanese is largely a matter of perspective, I think. I'm pretty sure something like "janpusuru"(ジャンプする) is considered a single word. Am I wrong?
The claim that I've read and heard from linguists about this is that while words like janpusuru are semantically verbs, grammatically they are a noun janpu + the standard verb suru. Contrast the English expression "I am doing homework" vs. "I am *homeworking". "Homework" isn't really used as a verb in English, but we can express the idea of homework-as-an-action by saying "do homework". New non-suru verbs in Japanese do apparently happen from time to time (Wikipedia uses the example of guguru — "to google") but they're rare, so the class is mostly closed.
That makes good sense.
Why did you use such a pronoun in the first case, if you're talking about one specific person whose gender you know?
Presumably they didn't know Armok's gender at the time. (If they did I agree it's silly).
Alternately, perhaps they knew Armok's gender but not whether he'd chosen to disclose it to the group.
Confirming this. Also, I like gender-neutral pronouns and have seen them used on here before--I didn't think it would cause this much argument.
Confirming this.


You've never enjoyed a pleasant surprise? I can imagine not being pleasantly surprised at largish changes in your life, like where you live, work, have relationships, etc. But what about littler things? Like reading a book you thought would be good, but have it turn out to be GREAT instead? Or finding a five-dollar bill on the ground?

I think maybe a good corollary to 9 might be "Different people appreciate different scales of pleasant surprises. Some people will be delighted to find out they can switch to an awesome new career. Others would prefer something smaller-scale, like finding a little money on the ground. Adjust accordingly."

Actually, going back to Eliezer's article expanding rule 9, Justified Expectation of Pleasant Surprises, he imagines two possible worlds:

But in one world, the abilities that come with seniority are openly discussed, hence widely known; you know what you have to look forward to.

In the other world, anyone older than you will refuse to talk about certain aspects of growing up; you'll just have to wait and find out.

I ask you to contemplate—not just which world you might prefer to live in—but how much you might want to live in the second world, rather than the first. I would even say that the second world seems more alive; when I imagine living there, my imagined will to live feels stronger. I've got to stay alive to find out what happens next, right?

I take the first option.

My problem with the second is that the real world contains enough surprise already, without having to add artificial, fake surprise. I've got to stay alive to find out what happens next, anyway, and I can do without UFNIs sticking their oar in and treating me like their pet cat. ("Oh yes, you do like a surprise, don't you, I just know you do, yes you do, it will be a wonderful surprise yes it will etc.etc.")

Though the context dampened it, I was surprised to hear that anyone at all would even slightly prefer the second option. I think this may constitute the first time that I've felt a serious and noticeable disconnect with Eliezer.
I think you're fighting the facts. I don't know Alicorn, but the message I get from what she has posted is that for her, the optimal size of surprise is zero, and that offering her just a tiny little wafer-thin surprise is missing her point.
I actually talked to her about it, and while I'm not sure she'd agree with my assessment, the issue seems to be something like type, not something like size. We did establish that finding unexpected blueberries at a farmer's market would be considered a good thing, though that definitely doesn't extend to being given an unexpected gift, even at a culturally-expected gift-giving time.
Is that because they're blueberries, or because they're unexpected? It may be that the surprise of unexpected blueberries is bad, but the blueberries themselves are good, so that "expected blueberries" > "unexpected blueberries" > "expected lack of blueberries" > "unexpected lack of blueberries".
Your ordering is correct. The "unexpected blueberries" kind of surprise doesn't bother me as much as what I think the surprise-related law [AAAAAH] is getting at because the space in which blueberries are to be found is known to me and unrestrictedly open for my inspection. I also don't mind books being presented linearly if I can look on Wikipedia for the ending whenever I want, presents being wrapped under the Christmas tree if I have a reasonable expectation that I will get most of the things on my wishlist and few other things and I can look up what I asked for at will, people around me laughing at funny things they think of as long as they'll say what they were on request, etc. This even if I don't happen to look up/ask every time.
Here are some different kinds of surprises I can think of. It is not a complete schema and I'm not even sure I've picked the most natural boundaries - but I do think it's better than just using the word "surprise" as if we could talk about all these things at once:

S1) Being surprised to get something, when I would have been able to predict in advance how I would respond to it. (E.g. a spontaneous gift of chocolate.)

S2) Having knowledge withheld in order to experience a more gradual unfolding of knowledge.

S2a) Dramatic surprises; getting to follow the story as it progresses without knowing the ending already; being ignorant along with the characters (e.g. No Spoilers!)

S2b) Having the answer to a problem or theoretical insight withheld and finding it myself. (E.g. there are only solutions to the odd-numbered problems in the back of the book, or the proof is omitted as an exercise for the reader.)

S3) Having future experiences that are good in a way I couldn't even understand now. (There are plenty of experiences I have as a grown-up that would have been incomprehensible to me as a child.)

S4) Watching a scary movie and strongly suspecting that a scary monster is about to jump out soon, but not knowing exactly when, or what it will look like.

At different times in my life I've had different levels of liking/tolerance/disliking for different types of surprise.
How often do you take advantage of this sort of thing? To be specific, what fraction of books' endings do you look up?
I'm generally content not to look them up until I start wondering if somebody's going to die, or any time the suspense is being laid on with a ladle, or when I'm confused about what I have already read. I read really fast and I'll get to the relevant part soon enough. I don't know about fractions, and it's usually not the ending I'm itching to read - endings need lots of plot setup, I could read them and not understand what was going on. I just want to know if so-and-so lives or if such-and-such disaster occurs. Once I stopped a TV show my best friend was showing me in the middle, when she wouldn't tell me a spoiler answer to a question I asked, so I could leave the room and look it up on Wikipedia. She knew all about me and spoilers, she just couldn't bring herself to say it.
Interesting. For more evidence of human diversity, I often have difficulty not telling people unrequested spoilers in otherwise-similar situations.
How much does this inform your fiction-writing? In most story-telling there is some pretense of unpredictability. Some TV shows are exceptions, the kind where the producers basically promise the audience that nothing will change over the half-hour. Another sort of exception is history. Some readers might be upset to have the events of The Surgeon of Crowthorne spoiled for them, but for the most part it is not regarded as cheating to put down a history book to look up a character bio on Wikipedia. I think it would be an interesting constrained-writing experiment to structure a novel in the same way, where the author couldn't rely on suspense to keep you interested in the plot. Perhaps there could be appendices with an encyclopedia-style entry for each main character and each main event in the book, which the reader was encouraged to skip to as they pleased.
Oh, I'm perfectly capable of inflicting suspense on other people. I've seen it done enough. (But there are spoilers available to be clicked open on all the character pages on Elcenia, and I'll provide spoilers to anyone who asks nicely.)
What I had more in mind was a pleasant surprise with these characteristics: 1. You go do some activity you know you are going to enjoy. 2. You have planned this activity in advance. 3. You end up enjoying the activity even more than you anticipated you would. 4. Your enhanced enjoyment is due to the properties of the activity, not because someone drugged you or something. In other words, the pleasant surprise is the strength of your positive emotional reaction to an event, not the event itself. That was the sort of pleasant surprise that I have trouble believing no one would enjoy, and that I would unreservedly not mind an FAI planning for me.
That wouldn't bother me, but it is in no way superior to knowing in advance that I will enjoy the activity the larger amount ahead of time.
It wouldn't be superior for you, but if an FAI (or just a person trying to live by the principles of Fun Theory) is at some point required to simultaneously optimize an environment for both you and someone who likes pleasant surprises, that would be an effective way to satisfy the other person's desire for pleasant surprises while simultaneously satisfying your desire to never be surprised in certain ways. It would be a good way to fulfill that Principle of Fun without harming any of the people to whom it doesn't apply.
I think that one way to generalize this preference would be as follows: "The Utopia should allow me to voluntarily -- and temporarily -- limit my own capabilities if I so choose, but it should never impose limits on them. In fact, the Utopia should provide me with vastly enhanced capabilities, if I so choose." Thus, you could voluntarily turn down the gain on your Omni-Sensors when browsing the Farmer's Market, while knowing that you could turn them up at any time. Similarly, you could disengage your Anti-Grav Flight Module in order to climb a mountain, as long as you knew that you could turn it back on at a moment's notice (or even faster than that).
Sure :)
My point was more "unexpected blueberries in the context of a farmer's market" > "no blueberries" > "unexpected blueberries in the context of an unexpected gift".
Actually, going back to Eliezer's article expanding rule 9, Justified Expectation of Pleasant Surprises [], he imagines two possible worlds. My problem with this is that the real world contains enough surprise already, without having to add artificial, fake surprise. I've got to stay alive to find out what happens next, anyway, and I can do without UFNIs sticking their oar in and treating me like their pet cat. ("Oh yes, you do like a surprise, don't you, I just know you do, yes you do, it will be a wonderful surprise, yes it will, etc., etc.")
What makes people happy varies from person to person. I think the problem that we should be solving is how to help people make optimal decisions for their joys. Once we get a great solution to that problem, then we can just make a Utopia where people are totally free, except to physically harm others or their property.

And what if for some people the optimal decisions for their joys involve emotionally harming others? Or the optimal decisions for their joys actually require physically harming others or their property, which they're prohibited from doing? What if being that free makes a lot of people really unhappy? (This last is not just plausible, but probable [].)

Reflecting on all of this, it occurred to me that the world of the show "Adventure Time" keeps pretty close to Eliezer's laws (investigating this thoroughly is left as an exercise because it's my bedtime).

I definitely think it would be a fun world to live in, but it's certainly not something that I'd force on the world.

That said, in direct contradiction to taw's comment, there are a number of episodes where they spend time having movie night, looking at cute things from the internet, playing video games all day, or having ice cream eating marathons.

Certainly humans should do things, but what if I want to play the flute now and only have flute-whittling down as my activity for century seventeen?

Well, what if you do? The whole point is that getting everything you want is not necessarily a proper utopia.

Sometimes - often - I just want people to tell me stuff.

Again, utopia does not necessarily consist of getting everything you want.

True, but if a Utopia isn't giving you what you want, it must at least give you something better. And I suspect that, between two activities you'll find fun, the one you actually desire will feel more fun. Sometimes it can be fun to discover things for yourself, but I'd get irritated if I had to figure out everything for myself, much like I'd be irritated if I had to farm and cook and build for myself all the time. There may be a balance to be had, but I think it should be defined by desires. Perhaps I would actually have more fun trying to solve a problem than just having the answer told to me, but I can guarantee you that I would be very irritated if I were forced into spending time solving a problem I didn't want to solve; I value my freedom.