If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Perhaps it would be beneficial to use a unary numeral system when discussing topics prone to biases like scope insensitivity, probability neglect, and overweighting outcomes that are likely to occur. A unary numeral system gives a more visual representation of the numbers, which might give readers more intuition about them and thus make them less biased. Here's an example, where each | represents 1,000 birds: “One study found that people are willing to pay $80 to save || (2,000 birds), but only $88 to save |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| (200,000 birds).”

Edit: Made it a bit easier to read.
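For anyone who wants to play with this, here is a minimal sketch of such a unary formatter (the function name, the one-bar-per-1,000 scale, and the flooring behavior are my own choices, not anything from the study):

```python
def unary(n, unit=1000, symbol="|"):
    """Render n in unary, one symbol per `unit` of n (rounding down)."""
    return symbol * (n // unit)

# The bird example, at 1,000 birds per bar:
print(f"$80 to save {unary(2_000)} (2,000 birds)")
print(f"$88 to save {unary(200_000)} (200,000 birds)")
```

The second line prints 200 bars, which makes the hundredfold difference hard to miss in a way the digits "2,000" and "200,000" are not.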


A unary number system is a really fancy name for an ASCII graph :)

Reminds me of the "irony meter" some of my friends use instead of smilies, as smilies are binary, while this can express that something is almost but not quite serious: [...........|.....]

The popular method right now seems to be using areas of shapes or heights of bars on graphs when this sort of visual representation is necessary. However, I like the way you showed it here, mostly because I have wanted to enter repeating sequences of characters like that into a comment on this site to see what it would look like. ;). I hope people represent numbers with long lines of repeating characters on this website more often. I vote for alternating '0' & 'O'.
Though using bar graphs is pretty, it often seems to take up too much space and takes a bit too long to make in some cases. I suppose both bar graphs and unary numeral systems are useful, and which one to use depends on how much space you're willing to use up. Edit: Also, why alternating 0s and Os? To make counting them easier?
I asked for them because (a)I want to highlight long lines of characters in the LW comment interface and watch the Mac anti-aliasing overlap with itself, which looks cool, and (b)I don't want to just post a series of comments that have no valuable content but are just playing with the reply nesting system and posting repeating lines of characters and whatnot, because I don't want to get down voted into oblivion. Alternating 0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O is visually appealing to me, and I want to see visually appealing things, so I asked to see more visually appealing things on the website. The request was made purely for selfish reasons.
I can see that. Still, 0s and Os take up more space than | and take a bit longer to type due to needing to alternate them.
If you would like to be horrified, represent the number of deaths from WWII in unary in a text document and scroll through it (by copy pasting larger and larger chunks, or by some other method). There are about 4000 "1" characters in a page in MS Word, so at 20 million battle deaths, you'll get about 5000 pages.
If you really want to be horrified, make a document with one "I" for every sentient being whose life would be prevented by an existential catastrophe. Oh wait, that's too many to store in memory...
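The page arithmetic in the parent comment is easy to check (taking the quoted figures of roughly 4,000 characters per MS Word page and 20 million battle deaths at face value):

```python
deaths = 20_000_000     # battle deaths, as quoted above
chars_per_page = 4_000  # "1" characters that fit on one page, as quoted
pages = deaths // chars_per_page
print(pages)  # 5000 pages of solid "1"s
```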

365tomorrows recently published a hard science-fiction story of mine called "Procrastination", which was inspired by the ideas of Robin Hanson. I believe LessWrong will find it enjoyable.

Nice work. The story is quite uplifting, actually. It would be nice to retain some memory of one's other instances, of course. But still beats having just one physical life.
I thought that the ideas seemed awfully familiar, when the story popped up on 365!

Woody Allen on time discounting and path-dependent preferences:

In my next life I want to live my life backwards. You start out dead and get that out of the way. Then you wake up in an old people's home feeling better every day. You get kicked out for being too healthy, go collect your pension, and then when you start work, you get a gold watch and a party on your first day. You work for 40 years until you're young enough to enjoy your retirement. You party, drink alcohol, and are generally promiscuous, then you are ready for high school. You then go to primary school, you become a kid, you play. You have no responsibilities, you become a baby until you are born. And then you spend your last 9 months floating in luxurious spa-like conditions with central heating and room service on tap, larger quarters every day and then Voila! You finish off as an orgasm!

The rationality gloss is that a naive model of discounting future events implies a preference for ordering experiences by decreasing utility. But often this ordering is quite unappealing!

A related example (attributed to Gregory Bateson):

If the hangover preceded the binge, drunkenness would be considered a virtue and not a vice.


Tsk, tsk. You don't collect your pension or gold watches, or drink alcohol, etc. You pay someone else your pension, give away a gold watch, and un-drink the alcohol.

He didn't say that time flowed backwards, just the order of major life events. And you'd start collecting your pension when you get kicked out of the nursing home, and give it up when you start working.

A similar one by Vonnegut:

It was a movie about American bombers in the Second World War and the gallant men who flew them. Seen backwards by Billy, the story went like this: American planes, full of holes and wounded men and corpses, took off backwards from an airfield in England. Over France, a few German fighter planes flew at them backwards, sucked bullets and shell fragments from some of the planes and crewmen. They did the same for wrecked American bombers on the ground, and those planes flew up backwards to join the formation. The formation flew backwards over a German city that was in flames. The bombers opened their bomb bay doors, exerted a miraculous magnetism which shrunk the fires, gathered them into cylindrical steel containers, and lifted the containers into the bellies of the planes. The containers were stored neatly in racks. The Germans below had miraculous devices of their own, which were long steel tubes. They used them to suck more fragments from the crewmen and planes. But there were still a few wounded Americans, though, and some of the bombers were in bad repair. Over France, though, German fighters came up again, made everything and everybody good as new. When ...

As Jiro and Toggle point out, this isn't time reversal, this is Benjamin Button disease. I think the original short story, much more than the film, portrays this correctly as a tragi-comedy. For example, he's a Brigadier-General, but he gets laughed out of the army because he looks like a 16-year-old. I wonder about people who think that life would be better lived backwards, or that effect should precede cause. Isn't this the universe telling you "Change your ways" in neon capital letters?
Well, the central thing would seem to be changing aging, which isn't induced by any human actions (although you might say people who live healthier get to age more slowly) - if there's any message from the universe in aging, that message is simply "fuck you for being here".

...supporters say the opposition leader was assassinated to silence him...

I see headlines like this fairly regularly.

Does anybody know of a list of notable opposition leaders, created when all members of the list were alive? Seems like it could be educational to compare the death rate of the list (a) across countries, and (b) against their respective non-notable demographics.

I want to make some new friends outside of my current social circle. I'm looking to meet smart and open minded people. What are some activities I could do or groups I could participate in where I'm likely to meet such people?

I'm particularly interested in personal experiences meeting people, rather than speculation, e.g. "I imagine ballroom dancing would be great" is not as good as "I met my partner and best friend ballroom dancing."

Also of interest would be groups where this is bad, e.g., if ballroom dancing was no good then "I never made any friends ballroom dancing, despite what I initially thought" would be a useful comment.

(I have a small list of candidate groups already, but I want to see what other people suggest to verify my thinking.)

See http://www.meetup.com/
Perhaps I was unclear. It's not that I can't find groups, it's that I want to know which groups have environments more conducive to meeting people of interest to me. For example, I went to a meditation event once and enjoyed it for its stated purpose, but basically everyone left before I could talk to anyone aside from the instructor. Clearly, this meditation event is not what I am looking for.
Speaking of dancing, there was an extra follower at the introductory class at the Fed when Harsh and I went--so if you had come along, it would have been even! Also on the project list now is to make a DDR-style game where you are responding just to the aural stimulus, rather than a visual one, with actual songs and actual dances, to have a single-player way of picking up the right thing to do with your feet and when to do it. (Does this already exist?)
I really enjoy dancing, but I've been doing it for years and haven't really met anyone through it. YMMV, and I've heard many people's M does V. I met most of my friends through reddit meetups.
If you aren't playing already, Magic: The Gathering can be a great hobby for meeting new people. The community trends towards smart (and open-minded, but less clearly so). Most stores have events each Friday. There is some barrier to entry, but I found the game easy enough to grasp.
Join Facebook groups that follow your hobbies or favorite books/films/anime/anything. Wait for scheduled meetup events. Rinse and repeat.
Without knowing your interests and the kind of people you want to meet, it's hard to give targeted advice. It also depends a lot on local customs. Some meditation events have a culture where the people who attend bond together; others don't. As far as dancing goes, the default interaction is physical. If you want to make friends you also have to talk; if talking comes hard to you, then dancing won't produce strong friendships. If your goal is to build a social circle, it's also vital to attend events together with other people. Constantly going alone to events doesn't fit that purpose.
Good points. I was intentionally keeping things general (and thus vague) for a few reasons. To be more specific, I'm looking for people who are similar to myself. The main restriction here is that I'm looking to meet reasonably smart people, which I think is a prerequisite to knowing me better. (I could be much more specific if you'd like to help me out more, but I'd prefer to take that to a private message.) I'm curious about your thoughts on attending events with other people. Why would this help?
I once read somewhere that being a friend means seeing a person in at least three different contexts. Going to a meditation event together with people from my local LessWrong meetup increases the feeling of friendship between me and the other LW'ler. If I dance Salsa with a girl whom I first met at a Salsa-unrelated birthday party, it feels like there's a stronger friendship bond than if I just see her from time to time at Salsa events. It's important to interact with people in different contexts if you want to build a friendship with them. I can't promise that I have useful advice before knowing the specifics, but I'm happy to take my shot.
Thanks, this all makes sense. I'll have to take you up on the offer later, as my priorities are shifting now.
I've met some friends swing dancing, so consider it somewhat recommended. I don't know where you are, but you could try starting a local LW meetup group, that sometimes works for me. I don't know what your housing situation is, but if it's currently not contributing to your social life, consider moving into a group house either of people you'd like to get closer with, or with some selection process that makes them likely to be compatible with you.
Thanks for the comment. We already have a local LW meetup, and many of my local friends I've met through there. It's a small but highly appreciated group. The group house idea is excellent. I have read of a number of houses in the area targeting people with certain lifestyles (vegan, recovering alcoholics, etc.) but I never looked that closely into them. Nor have I considered looking for a group house that might not have explicit goals but be composed of people I'd find interesting. I'll take a closer look.

What gets more viewership, an unpromoted post in main or a discussion post? Also, are there any LessWrong traffic stats available?

http://www.alexa.com/siteinfo/lesswrong.com (The recent uptick is due to hpmor, I suppose?)
Probably yes, see: http://www.alexa.com/siteinfo/hpmor.com
Lol yeah ok. I was unsure because alexa says 9% of search traffic to LW is from "demetrius soupolos" and "traute soupolos" so maybe there was some big news story I didn't know about.

Assuming no faster-than-light travel and no exotic matter, a civilization which survives the Great Filter will always be contained in its future light cone, whose boundary is a sphere expanding outward at constant speed c. So the total volume available to the civilization at time t will be V(t) ~ t^3. As it gets larger, the total resources available to it will scale in the same way, R(t) ~ V(t) ~ t^3.

Suppose the civilization has intrinsic growth rate r, so that the civilization's population grows as P(t) ~ r^t.

Since resources grow polynomially and population grows ...
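The race between cubic resources and exponential population can be sketched numerically. (The 1% growth rate and the brute-force crossover search below are illustrative assumptions of mine, not part of the original comment.)

```python
# Resources scale with light-cone volume, R(t) ~ t**3;
# population grows exponentially, P(t) ~ r**t.
r = 1.01  # assumed intrinsic growth rate: 1% per unit time

def resources(t):
    return float(t) ** 3

def population(t):
    return r ** t

# However small r - 1 is, the exponential eventually overtakes the cubic.
t = 2  # start past t = 1, where both are ~1 and the comparison is degenerate
while population(t) < resources(t):
    t += 1
print(f"population overtakes resources at roughly t = {t}")
```

Slower growth rates push the crossover further out, but for any r > 1 the loop terminates: no polynomial resource supply can keep pace with exponential growth forever.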

I like Eliezer's solution better. Rather than wait until exponential population growth eats all the resources, we just impose population control at the start and let every married couple have a max of two children. That way, population grows at most linearly (assuming immortality).
Broadly speaking, I'm suspicious of social solutions to problems that will persist for geological periods of time. If we're playing the civilization game for the long haul, then the threat of overpopulation could simply wait out any particular legal regime or government. That argument goes hinky in the event of FAI singleton, of course.
Wouldn't this conclusion require that individuals and their resources are not resources to other individuals? A society that doesn't share their lives in some way is not a community. If you share "the internet" among the number of people that use it, the resultant amount of information might be quite low, but that doesn't mean that people don't have access to information. In the same way, if one chunk of energy is used in one way, it might fulfill more wishes than just one person's. What this means is you either need to cooperate or compete. The only one that can have the universe to himself without contention is the lone survivor. But if you are in a populated universe, politics becomes necessary and its sphere widens.

A smuggler's view of learning :)

Knowledge as acquired in school-time (attending + holidays; just about until you graduate almost all your time is governed by school) is like an irregular shoreline with islets of trivia learned through curiosity and rotting marshland lost because reasons and never regained. (We congratulate ourselves for not risking malaria, seeing as we are experienced pirates and all.)

And we forget the layout and move inland, because that's where stuff happens. Jobs, relationships, kids, even dead ends are more grownup than the crumbling ...


I have not yet read the sequences in full, but let me ask: is there maybe an answer to what is bothering me about ethics, namely why basically all ethics in the last 300 years or so is universalistic? I.e. prescribing to treat everybody without exception according to the same principles? I don't understand it because I think altruism is based on reciprocity. If my cousin is starving and a complete stranger halfway across the world is starving even more, and I have money for food, most ethics would figure I should help the stranger. But from my angle, I am ob...

Because so much of it comes out of a Christian tradition with a deep presumption of Universalism built into it. But you are not the first person to ask this tradition "What is the value of your values?". Your "reciprocal ethics" might be framed as long-term self-interest, or as a form of virtue ethics. It immediately makes me think of Adam Smith's The Theory of Moral Sentiments. There's a nice discussion on related themes here, or try googling the site for "virtue ethics".
Hm, I would call it "graded ingroup loyalty". To quote an Arab saying: "me and my brother against my cousin, me and my cousin against the world". Instead of a binary ingroup and outgroup, other people are gradually more or less your ingroup: spouse more than cousin, cousin more than buddy, buddy more than compatriot, compatriot more than someone really far away.
But note that reciprocity is almost the opposite of loyalty. That kind of tribalism is dysfunctional in the modern world, because:

* You can't necessarily rely on reciprocity in those tribal relationships any more
* You can achieve reciprocity in non-tribal relationships

Rather than a static loyalty, it is more interesting to ask how people move into and out of your ingroup. What elicits our feelings of sympathy for some more than others? What kind of institutions encourage us to sympathise with other people and stand in their shoes? What triggers our moral imagination? I'd tell a story of co-operative trade forcing us to stand in the shoes of other people, to figure out what they want as customers, thus not only allowing co-operation between people with divergent moral viewpoints, but itself giving rise to an ethic of conscientiousness, trustworthiness, and self-discipline. The "bourgeois virtues" out-competing the "warrior ethic."
I think universalism is an obvious Schelling point. Not just moral philosophers find it appealing, ordinary people do it too (at least when thinking about it in an abstract sense). Consider Rawls' "veil of ignorance".
I think one reason is that as soon as one tries to build ethics from scratch, one is unable to find any justification that sounds like "ethics" for favouring those close to oneself over those more distant. Lacking such a magic pattern of words, they conclude that universalism must be axiomatically true. In Peter Singer's view, to fail to save the life of a remote child is exactly as culpable as to starve your own children. His argument consists of presenting the image of a remote child and a near one and challenging the reader to justify treating them unequally. It's not a subject I particularly keep up on; has anyone made a substantial argument against Singerian ethics? It is often observed here that favouring those close to oneself over those more distant is universally practised. It has not been much argued for, though. Here are a couple of arguments:

1. It is universally practised and universally approved of to favour family and friends. It is, for the most part, also approved of to help more distant people in need; but there are very few who demand that people place them on an equal footing. Therefore, if there is such a thing as Human!ethics or CEV, it must include that.

2. As we have learned from economics, society in general works better when people look after their own business first and limit their inclination to meddle in other people's. This applies in the moral area as well as the economic.
Wait, I didn't even notice it. That is interesting! So for something to qualify as a philosophy or theory, you need to try to build it from scratch? I know people who would consider that hubris. They would say that you can amend and customize and improve on things that were handed to you by tradition, but you can never succeed at building from scratch.
Not necessarily, but that is certainly the currently fashionable approach. Also if you want to convince someone from a different culture, with a different set of assumptions, etc., this is the easiest way to go about doing it.
I am not very optimistic about that happening. I think I should write an article about Michael Oakeshott. Basically Oakie was arguing that the cup you are pouring into is never empty. Whatever you tell people, they will frame it in their previous experiences. So the from-scratch philosophy, the very words, do not mean the same thing to people with different backgrounds. E.g. Hegel's "Geist" does not exactly mean what "spirit" means in English.
That's what philosophers do. Hence such things as Rawls' "veil of ignorance", whereby he founds ethics on the question "how would you wish society to be organised, if you did not know which role you would have in it?" And there are also intellectuals (they tend to be theologians, historians, literary figures, and the like, rather than professional philosophers), who say exactly that. That has the problem of which tradition to follow, especially when the history of all ages is available to us. Shall we reintroduce slavery? Support FGM? Execute atheists? Or shall the moral injunction be "my own tradition, right or wrong", "Jedem das Seine" ("to each his own")?
No, that's what some philosophers do. You can't just expel the likes of Michael Oakeshott or Nietzsche from philosophy. Even Rawls claimed at times to be making a political, rather than ethical, argument. The notion that ethics have to be "built from scratch" would be highly controversial in most philosophy departments I'm aware of.
Of all these approaches, only the latest is really worthy of consideration IMHO: different houses, different customs. One thing is clear, namely that things that are largely extinct for any given "we" (say, a culture, a country, and so on) do not constitute a tradition. The kind of reactionary bullshit like reinventing things from centuries ago and calling it traditionalism merely because they are old should not really be taken seriously. A tradition is something that is alive right now, so for the Western civ, it is largely things like liberal democracy, atheism and light religiosity, anti-racism and less-lethal racism. The idea here is that the only thing truly realistic is to change what you already have; inherited things have only a certain elasticity, so you can have modified forms of liberal democracy, more or less militant atheism, a bit more serious or even lighter religiosity, a more or less stringent anti-racism and a more or less less-lethal racism. But you cannot really wander far from that set. This - the reality of only being able to modify things that already exist, and not to create anew, and to modify them only to a certain extent - is what I would call a sensible traditionalism, not some kind of reactionary dream about bringing back kings.
I think that is the issue. "Sounds like ethics" when you go back to Kant, comes from Christian universalism. Aristotle etc. were less universal. Is Singer even serious? He made the argument that if I find eating humans wrong, I should find eating animals also wrong because they are not very different. I mean, how isn't it OBVIOUS that would not be an argument against eating animals but an argument for eating humans? Because unethical behavior is the default and ethical is the special case. Take away speciality and it is back to the jungle. To me it is so obvious I hardly even think it needs much discussion... ethics is that thing you do in the special rare cases when you don't do what you want to do, but what you feel you ought to. Non-special ethics is not ethics, unless you are a saint.
I see no reason to doubt that he means exactly what he says. Modus ponens, or modus tollens? White and gold, or blue and black? On the whole, we observe that people naturally care for their children, including those who still live in jungles. There is an obvious evolutionary argument that this is not because this has been drummed into them by ethical preaching without which their natural inclination would be to eat them. To be a little Chestertonian, the obvious needs discussion precisely because it is obvious. Also a theme of Socrates. Some things are justifiably obvious: one can clearly see the reasons for a thing being true. For others, "obvious" just means "I'm not even aware I believe this." As Eliezer put it:
Most people who are against eating human children would also be against eating human children grown in such a way as to not have brains. Yet clearly, few of the ethical arguments apply to eating human children without brains. So the default isn't "ethical behavior", it's "some arbitrary set of rules that may happen to include ethical behavior at times".
1. The Nazis and Ayn Rand's egoism were in the last 300 years, so no.

2. That said, it is now harder to ignore people in far-off lands, and easier to help them.

3. Utilitarianism is popular on LW because, AFAICT, it's mathy.

4. You haven't explained why your reciprocal ethics should count as ethics at all.
Of 4: Is there a definition of what counts as ethics? I suppose being universal is part of the definition and then it is defined out. Fine. But the problem is, if Alice or Bob comes and says "Since I am only interested in this sort of thing by definition I am unethical", this is also not accurate, because it does not really predict what they are. They are not necessarily Randian egotists, they may be the super good people who are very reliable friends and volunteer at local soup kitchens and invest into activism to make their city better and so on, they just change the subject if someone talks about the starvation in Haiti. That is not what "unethical" predicts.
I'm talking about reciprocal ethics. Most people would say that volunteering at a soup kitchen is good, but many would change their mind if they heard that some advantage was being expected in return. And if it isn't, in what way is it reciprocal?
Either I really need to write clearer or you need to read with more attention. Above, "I am not even considering the chance of a direct payback, simply the utility of having people I like and associate with not suffer is a utility to me, obviously." Making your city better by making sure all of its members are fed is something that makes you better off. It is not a payback or special advantage, but still a return. It makes the place on the whole more functional and safer and having a better vibe. Of course it is not an investment with positive returns, this is why it is still ethics, there is always some sacrifice made. It is always negative return, just not 0 return like "true" altruism. Rather it is like this: If you have a million utils and invest it into Earth, you get 1 back by making Earth better for you. If you invest it into your country, you get 10 back by making your country better for you. Invest it into your city, you get 1000 back, by making your city better for you. Invest it into your cousin, 10K by making your relatives better for you, your bro, 100K by making your family better for you and so on.
But would that be objectively the right way to behave? It seems as if you are saying people distant from you are objectively worth less. I think you would need to sell this theory as a compromise between what is right and what is motivating.
Sorry, cannot parse it. My behavior with others does not reflect their objective worth (what is that?) but my goals. Part of my goals may be being virtuous or good, which is called ethics. Or it can be raising the utility of certain people or even all people, but that is also a goal. My behavior with diamonds does not reflect the objective worth of diamonds (do they have any?) but my goals wrt diamonds. Motivating: yes, that is close to the idea of goals. That is a good approach. How about this: if you want to work from the angle of objective worth, well, you too are not objectively worth less than others. So basically you want your altruism to be a kind of reciprocal contract: "I have and you do not, so I give to you, but if it is ever so in the future that you have and I do not, you should give to me too, because I am not worth less than you." If that sounds okay, then the next stage could be working from the idea that this is not a clearly formulated, signed contract, but more of a tacit agreement of mutual cooperation if and when the need arises, and then you have more of such a tacit agreement with people closer to you.
Maybe that's what it feels like for you. My altruistic side feeds on my Buddhist ethics: I am just like any other human, so their suffering is not incomprehensible to me, because I have suffered too. I can identify with their aversion to suffering because that's exactly the same aversion to suffering that I feel. It has nothing to do with exchange or expected gain.
It is interesting that you mention that, because I spent years going to Buddhist meditation centers (of the Lama Ole type) and at some level still identify with it. However, I never understood it as a set of ethical duties or maxims I must exert my will to follow, but rather a set of practices that will put me in a state of bliss and natural compassion where I won't need to exert will in this regard; goodness will just naturally flow from me. In this sense I am not even sure Buddhist ethics even exists, if we define ethics as something you must force yourself to follow even when you really don't feel like doing so. And I have always seen compassion in the B. sense as a form of gain to yourself - reducing the ego by focusing on other people's problems, so that our own problems look smaller because we see our own self as something less important. (I don't practice it much anymore, because I realized if a "religion" is based on reincarnation there is no pressing need to work on it right now, it is not like I can ever be too late for that bus, so you should only work on it if you really feel like doing so. And frankly, these years I feel like being way more "evil" than Ole :) )

Note: This post raises a concern about the treatment of depression.

If we treat depression with something like medication, should we be worried about people getting stuck in bad local optima, because they no longer feel bad enough that the pain of changing environments seems small by comparison? For example, consider someone in a bad relationship, or an unsuitable job, or with a flawed philosophic outlook, or whatever. The risk is that you alleviate some of the pain signal stemming from the lover/job/ideology, and so the patient never feels enough pressure...

I am neither a medical professional, nor have I ever been treated for depression, but my impression is that being depressed is itself a more serious risk factor for getting stuck in bad local optima like that; as well as making sufferers feel bad, it also tends to reduce the variability of how they feel. I haven't heard that giving depressed people antidepressants reduces the range of their affective states.
It depends on the type of local optimum. I am reasonably sure that becoming too depressed to do enough work to stay in was the only way I could have gotten out of graduate school, given my moral system at the time. (I hated being there but believed I had an obligation to try to contribute to human knowledge.) Also, flat affect isn't at all a universal effect of antidepressant usage, but it does happen for some people.
Isn't flat affect also a rather common effect of depression?
It happens but again it's not at all universal. Scott Alexander seems to think emotional blunting is a legitimate effect of SSRIs, not just a correlation–causation confusion. He also notes that
You assume that someone who's depressed is more motivated to change than a person who isn't depressed. Depression usually comes with reduced motivation to do things. A lot of depression medication even comes with warnings that it might increase suicide rates, because the person feels more drive to take action.
Yvain has written this and many other comprehensive posts on that topic (in the same blog).

It seems people make friends two ways:

1) chatting with people and finding each other interesting

2) going through difficult shit together and thus bonding, building camaraderie (see: battlefield or sports team friendships)

If your social life lags and 1) is not working, try 2)

My two best friends come from a) surviving a "deathmarch" project that was downright heroic (the worst week was over 100 hours logged) together b) going to a university preparation course, both getting picked on by a teacher who did not like us, and then both failing the entry exam in ... (read more)

Mountaineering or a similar extreme activity is one option.
I am now imagining someone engineering a great disaster or battle solely so they can make friends, who will, naturally, turn on them once they discover what happened. I'm given to believe that going through lots of fun things together can be friendship-building, if not quite the same as going through lots of difficult things together.
Things can be both fun and difficult, and that category seems to be the obvious kind to look for when you want to intentionally put yourself through it. The problem then is that with most such things, people attempt difficult-but-fun projects or adventures with people they're already friends with to at least some degree, so you'll have to look for such an opportunity or create it yourself.
Well, it is not that bad, thankfully. Just imagine a friendly soccer match between two villages' teams. Putting in your damnedest to win it is already a significantly more difficult thing than everyday life, and it creates bonding between team members. Since life started to get too easy for some people - and for some people, that was really long ago - they started to generate artificial difficulties to make things more exciting: sports, games like poker, gambling, and so on. Then what am I even asking? I am mainly just confused by choice and return on investment. Suppose I don't have much interest or time to invest in learning hobbies, yet would be willing to pay this tax for bonding, and would be looking for a team activity that feels difficult and uncertain enough to generate bonding - the kind of thing people later brag about. What would be the most effective one, I wonder.
Those two factors do matter, but they don't get to the meat of the issue. Given that you speak German, I would recommend you read Wahre Männerfreundschaft (disclosure: the author is a personal friend). Various initiation rituals of fraternities use that mechanism.
Interesting! Practically an artofmanliness.com in German? I didn't know this existed. I actually like it - I thought our European culture was too "civilized" for this. It is also useful as language practice for me - I am a textbook "kitchen speaker", perfectly fluent but with crappy grammar. Thanks a lot for this idea. I was asking around on Reddit about interesting German-language blogs years ago, and generally I got boring recommendations, so if you have a few more, please shoot. I think the German-language blogosphere and journalism suffer from a generic boredom problem, especially in Austria; I have no idea who reads diepresse.com or derstandard.at without falling asleep. I think the English-language journosphere is better at presenting similar topics in more engaging ways, e.g. The Atlantic. There is a time and place for those, such as universities, either the American "Greek letter culture" or the old German "putting scars on each other's faces with foils" kind. I don't think similar organizations compatible with family fathers approaching 40 exist. However, I hope once I get good enough at boxing to be allowed to spar full force, I will make some marvelous friendships through giving each other bruises - same logic as the face-scar fencing stuff.

If there is a way of copying and pasting or importing the text of a Google Doc into an article while retaining LessWrong's default formatting, I would be very happy to know it.

Turns out you're not the only one who wants to know this. It seems your best bet is to use Ctrl-Shift-V to paste raw text and then format it in the article editor.
Worked, thank you....

Where do I start reading about this AI superintelligence stuff from the very basics? I would especially be interested in this: why do we consider our current paradigms of software and hardware 1) close enough to human intelligence to base a superintelligence on, and 2) why don't we think that by the time we get there the paradigms will be different? I.e. AI rewriting its own source code? Why do we think AI is software? Why do we think a software-hardware separation will make sense? Why do we think software will have source code as we know it? Why woul... (read more)

One obvious source if you haven’t already read it is Nick Bostrom’s Superintelligence. Bostrom addresses many of the issues that you list, e.g. an AI rewriting its own software, why an AI is likely to be software (and Bostrom discusses one or two non-software scenarios as well), etc. This book is quite informative and well worth reading, IMO.

Some of your questions are more fundamental than what is covered in Superintelligence. Specifically, to understand why “alphabetical letters invented thousands of years ago to express human sounds” are adequate for any computing task, including AI, you should explore the field of theoretical computer science, specifically automata and language theory. A classic book in that field is Hopcroft and Ullman’s Introduction to Automata Theory, Languages and Computation (caution: don’t be fooled by the “cute” cover illustration; this book is tough to get through and assumes that the reader has a strong mathematics background). Also, you should consider reading books on the philosophy of mind – but I have not read enough in this area to make specific recommendations.

To explore the question of “why do we think software will have a source code as we know ... (read more)

But current neural nets don't have source code as we know it: the intelligence is coded very implicitly into the weights after training, and the source code explicitly specifies only a net that doesn't do anything.
It is true that much of the intelligence in a neural network is stored implicitly in the weights. The same (or similar) can be said about many other machine-learning techniques. However, I don't think that anything I said above indicated otherwise.

Regulation to prevent the formation of space junk seems beneficial: space junk could create Kessler syndrome, which would make it much harder to colonize space, which in turn would increase existential risk, since without space colonization a catastrophe on Earth could kill off all intelligent Earth-originating life.

I know this isn't completely on-topic, but I don't know of any forum on x-risk, so I don't know of any better place to put it. On a related note, is there any demand for an x-risk forum? Someone (such as myself) should make one if there is enough demand for it.

There is a general problem that a commons transitions from abundant to tragic as demand grows. At what point do you introduce some kind of centralized regulation (e.g., property rights)? How do you do that? But space is nowhere near that point. Not worrying about Kessler syndrome is the right answer. And if it were going to be a problem in the foreseeable future, there are very few users of space, so they could easily negotiate a solution. If you expect that in the future every city of a million people will be sovereign with its own space program, then there is more of a tragic commons, but in that scenario space is the least of your problems.
I'm not so sure that space around Earth is nowhere near that point. There is a concern that a collision with the single large satellite Envisat could trigger Kessler Syndrome, and "two catalogued objects pass within about 200m of it every year."

Does Netflix have a shortage of fictional content that stimulates your mind?


My answer is yes.

In Pascal's Mugging, the problem seems to be the use of expected values, which are highly distorted by even a single outlier.

The post led to a huge number of proposed solutions. Most seem pretty bad, and none of them even address the problem itself, just the specific thought experiment. Others, like bounding the utility function, are OK, but not really elegant. We don't really want to disregard high-utility futures, we just don't want them to highly distort our decision process. But if we make decisions based on expected utility, they inevitably do.

So w... (read more)
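The outlier distortion is easy to see in a toy calculation (all numbers here are made up purely for illustration):

```python
# Toy numbers, purely illustrative: a mugger threatens an astronomically
# bad outcome with astronomically low probability.
p_doom = 1e-30    # hypothetical probability the mugger's threat is real
u_doom = -1e40    # hypothetical utility of the threatened outcome

eu_refuse = p_doom * u_doom + (1 - p_doom) * 0.0   # about -1e10: the outlier dominates
eu_pay = -5.0                                      # certain small loss from paying

# Expected utility says pay the mugger, even though the median outcome
# of refusing is 0 and is untouched by the outlier.
print(eu_refuse < eu_pay)   # True
```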

VNM utility is basically defined as "that function whose expectation we maximize". There exists such a function as long as you obey some very unobjectionable axioms. So instead of saying "I do not want to maximize the expectation of my utility function U", you should say "U is not my utility function".
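A minimal sketch of that definition (outcome names and utility values are illustrative):

```python
# Minimal sketch: a VNM utility function is exactly the labeling of
# outcomes such that preferred lotteries have higher expected utility.
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u[o] for p, o in lottery)

u = {"nothing": 0.0, "dollar": 1.0, "million": 1_000_000.0}  # illustrative scale
safe   = [(1.0, "dollar")]
gamble = [(0.999, "nothing"), (0.001, "million")]

# An agent obeying the axioms with this u takes the gamble (EU 1000 vs 1);
# an agent who would not take it simply has a different u.
best = max([safe, gamble], key=lambda lot: expected_utility(lot, u))
```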

The problem with this argument is that it boils down to: if we accept intuitive axioms X, we get counter-intuitive result Y. But why is ~Y any less worthy of being an axiom than X?
You miss my point. I am objecting to those axioms. I don't want to change my utility function. If God is real, perhaps he really could offer infinite reward or infinite punishment. You might really think murdering 3^^^^3 people is just that bad. However these events have such low probability that I can safely choose to ignore them, and that's a perfectly valid choice. Maximizing expected utility means you will almost certainly do worse in the real world than an agent that doesn't.
Which axiom do you reject?
Continuity, I would say.
That makes no sense in context, since continuity is equivalent to saying (roughly) 'If you prefer staying on this side of the street to dying, but prefer something on the other side of the street to staying here, there exists some probability of death which is small enough to make you prefer crossing the street.' This sounds almost exactly like what Houshalter is arguing in the great-grandparent ("these events have such low probability that I can safely choose to ignore them,") so it can't be the axiom s/he objects to. I could see objecting to Completeness, since in fact our preferences may be ill-defined for some choices. I don't know if rejecting this axiom could produce the desired result in Pascal's Mugging, though, and I'd half expect it to cause all sorts of trouble elsewhere.
That sounds right, actually.

That is, look at the space of all possible outcomes and select the point where exactly 50% of them are better and exactly 50% are worse. Choose actions so that this median future is as good as possible.

This seems vulnerable to the following bet: I roll a d6. If I roll 3+, I give you a dollar. Otherwise I shoot you.
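With illustrative utilities (+1 for the dollar, a large negative number for being shot), a pure median rule does take this bet:

```python
import statistics

# The bet: roll a d6; on 3+ (probability 4/6) win $1, otherwise be shot.
# Represent the distribution by the six equally likely faces.
bet_outcomes = [-1e6, -1e6, 1.0, 1.0, 1.0, 1.0]   # faces 1..6; -1e6 is illustrative
decline_outcomes = [0.0] * 6

# The median outcome of the bet is winning the dollar, so a rule that
# maximizes the median accepts the bet despite the 1/3 chance of death.
print(statistics.median(bet_outcomes), statistics.median(decline_outcomes))
```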

I mention that vulnerability further down. Obviously it doesn't fit human decision making either, but I think it's qualitatively closer. An example of an algorithm that's closer to the desired behavior would be to sample n counterfactuals from your probability distribution. Then take the average of these n outcomes, and take the median of this entire setup. E.g. so 50% of the time the average of the n outcomes is higher, and 50% of the time it's lower. As n approaches infinity it becomes equivalent to expected utility, and as it approaches 1 it becomes median expected utility. A reasonable value is probably a few hundred. So that you select outcomes where you come out ahead the vast majority of the time, but still take low probability risks or ignore low probability rewards.
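A sketch of the rule described above (the lottery, sample counts, and utilities are all illustrative, and the median is estimated by simulation rather than computed exactly):

```python
import random
import statistics

def median_of_n_sample_means(lottery, n, trials=2001, rng=None):
    """lottery: list of (probability, utility) pairs.
    Estimate the median, over many trials, of the mean of n draws."""
    rng = rng or random.Random(0)            # fixed seed for reproducibility
    probs, utils = zip(*lottery)
    means = [
        statistics.fmean(rng.choices(utils, weights=probs, k=n))
        for _ in range(trials)
    ]
    return statistics.median(means)

bet = [(4/6, 1.0), (2/6, -1e6)]              # the d6 bet from above

# n = 1 recovers the plain median rule; large n approaches expected utility.
print(median_of_n_sample_means(bet, n=1))    # 1.0: the rare disaster is ignored
print(median_of_n_sample_means(bet, n=500))  # large and negative, near the EV
```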
This might sound silly, but it's deeper than it looks: the reason why we use the expected value of utility (i.e. means) to determine the best of a set of gambles is because utility is defined as the thing that you maximize the expected value of. The thing that's nice about VNM utility is that it's mathematically consistent. That means we can't come up with a scenario where VNM utility generates silly outputs with sensible inputs. Of course we can give VNM silly inputs and get silly outputs back--scenarios like Pascal's Mugging are the equivalent of "suppose something really weird happens; wouldn't that be weird?" to which the answer is "well, yes." The really nice thing about VNM is that it's the only rule that's mathematically consistent with itself and a handful of nice axioms. You might give up one of those axioms, but for any of those axioms we can show an example where a rule that doesn't follow that axiom will take sensible inputs and give silly outputs. So I don't think there's much to be gained by trying to replace a mean decision rule with a median decision rule or some other decision rule--but there is a lot to be gained by sharpening our probability distributions and more clearly figuring out our mapping from world-histories to utilities.
To me, consequentialism is either something trivial or something I reject, but that said this is a fully general (and kind of weak) counter-argument. I can apply it to Newcomb from the CDT point of view. I can apply it to Smoking Lesion from the EDT point of view! I can apply it to astronomic data from the Ptolemy's theory of celestial motion point of view! We have to deal with things!
I hesitate to call consequentialism trivial, because I wouldn't use it to describe a broad class of 'intelligent' agents, but I also wouldn't reject it, because it does describe the design of those agents. I don't see it as a counter-argument. In general, I think that a method is appropriate if hard things are hard and easy things are easy--and, similarly, normal things are normal and weird things are weird. If the output is weird in the same way that the input is weird, the system is behaving appropriately; if it adds or subtracts weirdness, then we're in trouble! For example, suppose you supplied a problem with a relevant logical contradiction to your decision algorithm, and it spat out a single numerical answer. Is that a sign of robustness, or lack of robustness?
I just meant I accept the consequentialist idea in decision theory that we should maximize, e.g. pick the best out of alternatives. But said in this way, it's a trivial point. I reject more general varieties of consequentialism (for reasons that are not important right now, but basically I think a lot of weird conclusions of consequentialism are due to modeling problems, e.g. the set up that makes consequentialism work doesn't apply well). I don't know what you are saying here. Can you taboo "weird?" Newcomb is weird for CDT because it explicitly violates an assumption CDT is using. The answer here is to go meta and think about a family of decision theories of which CDT is one, indexed by their assumption sets.
I understood and agree with that statement of consequentialism in decision theory--what I disagree with is that it's trivial that maximization is the right approach to take! For many situations, a reflexive agent that does not actively simulate the future or consider alternatives performs better than a contemplative agent that does simulate the future and considerate alternatives, because the best alternative is "obvious" and the acts of simulation and consideration consume time and resources that do not pay for themselves. That's obviously what's going on with thermostats, but I would argue is what goes on all the way up to the consequentialism-deontology divide in ethics. I would probably replace it with Pearl's phrase here, of "surprising or unbelievable." To use the specific example of Newcomb's problem, if people find a perfect predictor "surprising or unbelievable," then they probably also think that the right thing to do around a perfect predictor is "surprising or unbelievable," because using logic on an unbelievable premise can lead to an unbelievable conclusion! Consider a Mundane Newcomb's problem which is missing perfect prediction but has the same evidential and counterfactual features: that is, Omega offers you the choice of one or two boxes, you choose which boxes to take, and then it puts a million dollars in the red box and a thousand dollars in the blue box if you choose only the red box and it puts a thousand dollars in the blue box if you choose the blue box or no boxes. Anyone that understands the scenario and prefers more money to less money will choose just the red box, and there's nothing surprising or unbelievable about it. What is surprising is the claim that there's an entity who can replicate the counterfactual structure of the Mundane Newcomb scenario without also replicating the temporal structure of that scenario. But that's a claim about physics, not decision theory!
Absolutely. This is the "bounded rationality" setting lots of people think about. For instance, Big Data is fashionable these days, and lots of people think about how we may do usual statistics business under severe computational constraints due to huge dataset sizes, e.g. stuff like this: http://www.cs.berkeley.edu/~jordan/papers/blb_icml2012.pdf

But in bounded rationality settings we still want to pick the best of our alternatives; we just have a constraint that we can't take more than a certain amount of resources to return an answer. The (trivial) idea of doing your best is still there. That is the part I accept. But that part is boring; figuring out the right thing to maximize is what is very subtle (and may involve non-consequentialist ideas - for example, a decision theory that handles blackmail may involve virtue-ethical ideas, because the returned answer depends on "the sort of agent" someone is).
I don't agree. Utility is a separate concept from expected value maximization. Utility is a way of ordering and comparing different outcomes based on how desirable they are. You can say that one outcome is more desirable than another, or even quantify how many times more desirable it is. This is a useful and general concept.

Expected utility does have some nice properties, being completely consistent. However, I argued above that this isn't a necessary property. It adds complexity, sure, but if you self-modify your decision-making algorithm or predetermine your actions, you can force your future self to be consistent with your present self's desires.

Expected utility is perfectly rational as the number of "bets" you take goes to infinity. Rewards will cancel out the losses in the limit, and so any agent would choose to follow EU regardless of their decision-making algorithm. But as the number of bets becomes finite, it's less obvious that this is the most desirable strategy.

Pascal's Mugging isn't "weird"; it's perfectly typical. There are probably an infinite number of Pascal's-mugging-type situations: hypotheses with exceedingly low probability but high utility. If we built an AI today based on pure expected utility, it would most likely fail spectacularly. These low-probability hypotheses would come to totally dominate its decisions. Perhaps it would start to worship various gods, practice rituals, and obey superstitions. Or something far more absurd we haven't even thought of.

And if you really believe in EU, you can't say that this behavior is wrong or undesirable. This is what you should be doing, if you could, and you are losing a huge amount of EU by not doing it. You should want, more than anything in existence, the ability to exactly calculate these hypotheses so you can collect that EU. I don't want that, though. I want a decision rule such that I am very likely to end up in a good outcome. Not one where I will most likely end up in a very subo
Expected utility is convenient and makes for a nice mathematical theory. It also makes a lot of assumptions. One assumes that the expectation does, in fact, exist. It need not. For example, in a game where two players toss a fair coin, we expect that in the long run the number of heads should equal the number of tails at some point. It turns out that the expected waiting time is infinite. Then there's the classic St. Petersburg paradox. There are examples of "fair" bets (i.e. expected gain is 0) that are nevertheless unfavorable (in the sense that you're almost certain to sustain a net loss over time). Expected utility is a model of reality that does a good job in many circumstances but has some key drawbacks where naive application will lead to unrealistic decisions. The map is not the territory, after all.
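The St. Petersburg point can be made concrete: in the standard game the bank pays 2^k if the first head appears on toss k, so each feasible round contributes exactly 1 to the expectation, and the untruncated sum diverges. Capping the bank's payout makes the sum finite and surprisingly small:

```python
def truncated_st_petersburg_ev(max_payout):
    """Expected value of the St. Petersburg game when the bank can pay
    at most max_payout. Each round k pays 2**k with probability 2**-k,
    so every feasible round adds exactly 1 to the expectation."""
    ev, k = 0.0, 1
    while 2 ** k <= max_payout:
        ev += (0.5 ** k) * (2 ** k)
        k += 1
    return ev

# Even a bank with a ~billion-unit bankroll makes the game worth only 30.
print(truncated_st_petersburg_ev(2 ** 30))   # 30.0
```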
To Bentham, sure; today, we call something that generic "ranking" or something similar, because VNM utility is the only game in town when it comes to assigning real-valued desirabilities to consequences.

Disagreed. The proof of the VNM axioms goes through for a single bet; I recommend you look that up, and then try to create a counterexample. Note that it's easy to come up with a wrong utility mapping. One could, say, map dollars linearly to utility and then say "but I don't prefer a half chance of $100 and half chance of nothing to a certain $50!", but that's solved by changing the utility mapping from linear to sublinear (say, log or sqrt). In order to exhibit a counterexample it has to look like the Allais paradox, where someone confirms two preferences and then does not agree with the consequence of those preferences considered together.

It probably isn't the case that there are an infinite number of situations where the utility times the probability is higher than the cost, and if there are, that's probably a faulty utility function or a faulty probability estimator rather than a faulty EU calculation. Consider this bit from You and Your Research by Hamming:

An AI might correctly calculate that time travel is the most positive technology it could possibly develop - but also quickly calculate that it has no idea where to even start, and so the probability of success from thinking about it more is low enough that it should go for a more credible option. That's what human thinkers do, and it doesn't seem like a mistake in the way that the Allais paradox seems like a mistake.
Pascal's Wager is the counterexample, and it's older than VNM. EY's Pascal's Mugging was just an attempt to formalize it a bit more and prevent silly excuses like "well, what if we don't allow infinities or assume the probabilities exactly cancel out." It is a counterexample in that it violates what humans want, not in that it produces inconsistent behavior or anything. It's perfectly valid for an agent to follow EU, as it is for it to follow my method; what we are arguing about is entirely subjective. If you really believe in EU a priori, then no argument should be able to convince you it is wrong. You would find nothing wrong with Pascal situations, and would totally agree with the result of EU. You wouldn't have to make clever arguments about the utility function or probability estimates to get out of it. This is pretty thoroughly argued in the original Pascal's Mugging post. Hypotheses of vast utility can grow much faster than their improbability: the hypothesis "you will be rewarded/tortured 3^^^3 units" is infinitesimally small in an EU calculation next to the hypothesis "you will be rewarded/tortured 3^^^^^^^3 units", which only takes a few more bits to express, and it can grow even further.
Counterexample in what sense? If you do in fact receive infinite utility from going to heaven, and being Christian raises the chance of your going to heaven by any positive amount over your baseline chance, then it is the right move to be Christian instead of baseline. The reason people reject Pascal's Wager or Mugging is, as I understand it, that they don't see the statement "you receive infinite utility from X" or "you receive a huge amount of disutility from Y" as actual evidence about their future utility.

In general, I think that any problem which includes the word "infinite" is guilty until proven innocent, and it is much better to express it as a limit. (This clears up a huge amount of confusion.) And the general principle - that as the prize for winning a lottery gets better, the probability of winning the lottery necessary to justify buying a fixed-price ticket goes down - seems like a reasonable principle to me.

I think money pumps argue against subjectivity. Basically, if you use an inconsistent decision theory, someone else can make money off your inconsistency, or you don't actually use that inconsistent decision theory. I will say right now: I believe that if you have a complete set of outcomes with known utilities and the probabilities of achieving those outcomes conditioned on taking actions from a set of possible actions, the best action in that set is the one with the highest probability-weighted utility sum. That is, EU maximization works if you feed it the right inputs.

Do I think it's trivial to get the right inputs for EU maximization? No! I'm not even sure it's possible except in approximation. Any problem that starts with utilities in the problem description has hidden the hard work under the rug, and perhaps that means it has hidden a ridiculous premise. Assuming a particular method of assigning prior probabilities to statements, yes. But is that the right method of assigning prior probabilities to statements? (That is, yes, I've read Eliezer
Where "right" is defined as "maximizing expected utility", then yes. It's just a tautology: "maximizing expected utility maximizes expected utility". My point is that if you actually asked the average person, even if you explained all this to them, they would still not agree that it was the right decision. There is no law written into the universe that says you have to maximize expected utility. I don't think that's what humans really want. If we choose to follow it, in many situations it will lead to undesirable outcomes. And it's quite possible that those situations are actually common. It may mean life becomes more complicated than making simple EU calculations, but you can still be perfectly consistent (see further down.)

You could express it as a limit trivially (e.g. a hypothesis that in heaven you will collect 3^^^3 utilons per second for an unending amount of time.) Sounds reasonable, but it breaks down in extreme cases, where you end up spending almost all of your probability mass in exchange for a single good future with arbitrarily low probability.

Here's a thought experiment. Omega offers you tickets for 2 extra lifetimes of life, in exchange for a 1% chance of dying when you buy the ticket. You are forced to just keep buying tickets until you finally die. Maybe you object that you discount extra years of life by some function, so just modify the thought experiment so the reward increases factorially per ticket bought, or something like that.

Fortunately we don't have to deal with these situations much, because we happen to live in a universe where there aren't powerful agents offering us very-high-utility lotteries. But these situations occur all the time if you deal with hypotheses instead of lotteries. The only reason we don't notice it is that we ignore or refuse to assign probability estimates to very unlikely hypotheses. An AI might not, and so it's very important to consider this issue. My method isn't vulnerable to money pumps, as is an infi
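The ticket scenario can be quantified: under the stated 1% death chance per purchase, an agent that never stops buying almost surely dies, even if each single ticket looks attractive on expected value:

```python
def survival_probability(tickets, p_death=0.01):
    """Chance of surviving `tickets` independent purchases, each with
    probability p_death of death (values from the thought experiment)."""
    return (1 - p_death) ** tickets

# Survival decays geometrically toward zero as the agent keeps buying.
for n in (1, 100, 1000):
    print(n, survival_probability(n))
```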
Yes. That's the thing that sounds silly but is actually deep. That is the objection, but I think I should explain it in a more fundamental way. What is the utility of a consequence? For simplicity, we often express it as a real number, with the caveat that all utilities involved in a problem have their relationships preserved by an affine transformation. But that number is grounded by a gamble. Specifically, consider three consequences, A, B, and C, with u(A) < u(B) < u(C). If I am indifferent between B for certain and A with probability p and C otherwise, I encode that with the mathematical relationship:

u(B) = p u(A) + (1-p) u(C)

As I express more and more preferences, each number is grounded by more and more constraints. The place where counterexamples to EU calculations go off the rails is when people intervene at the intermediate step. Suppose p is 50%, and I've assigned 0 to A, 1 to B, and 2 to C. If a new consequence, D, is introduced with a utility of 4, that immediately implies:

1. I am indifferent between (50% A, 50% D) and (100% C).
2. I am indifferent between (75% A, 25% D) and (100% B).
3. I am indifferent between (67% B, 33% D) and (100% C).

If one of those three statements is not true, I can use that statement and D having a utility of 4 to prove a contradiction. But while the existence of D and my willingness to accept those specific gambles imply that D's utility is 4, the existence of the number 4 does not imply that there exists a consequence where I'm indifferent to those gambles! And so very quickly Omega might have to offer me a lifetime longer than the lifetime of the universe, and because I don't believe that's possible I say "no thanks, I don't think you can deliver, and in the odd case where you can deliver, I'm not sure that I want what you can deliver." (This is the resolution of the St. Petersburg Paradox where you enforce that the house cannot pay you more than the total wealth of the Earth, in which case the expected value of the bet
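Those three implied indifference claims can be checked mechanically against the stated utilities (reading the 67%/33% figures as exactly 2/3 and 1/3):

```python
# Utilities as assigned in the comment above: A=0, B=1, C=2, D=4.
u = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 4.0}

def eu(lottery):
    """Expected utility of a list of (probability, consequence) pairs."""
    return sum(p * u[c] for p, c in lottery)

def close(x, y):
    # Tolerant comparison to absorb floating-point rounding.
    return abs(x - y) < 1e-9

assert close(eu([(0.50, "A"), (0.50, "D")]), u["C"])   # claim 1
assert close(eu([(0.75, "A"), (0.25, "D")]), u["B"])   # claim 2
assert close(eu([(2/3, "B"), (1/3, "D")]), u["C"])     # claim 3
print("all three indifference claims check out")
```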
This is a cop out. Obviously that specific situation can't occur in reality, that's not the point. If your decision algorithm fails in some extreme cases, at least confess that it's not universal. Same thing. Omega's ability and honesty are premises. The point of the thought experiment is just to show that EU is required to trade away huge amounts of outcome-space for really good but improbable outcomes. This is a good strategy if you plan on making an infinite number of bets, but horrible if you don't expect to live forever. I don't get your drug research analogy. There is no pascal's equivalent situation in drug research. At best you find a molecule that cures all diseases, but that's hardly infinite utility. Instead it would be more like, "there is a tiny, tiny probability that a virus could emerge which causes humans not to die, but to suffer for eternity in the worst pain possible. Therefore, by EU calculation, I should spend all of my resources searching for a possible vaccine for this specific disease, and nothing else."
What does it mean for a decision algorithm to fail? I'll give an answer later, but here I'll point out that I do endorse multiplication of reals as universal - that is, I don't think multiplication breaks down when the numbers get extreme enough. And an unbelievable premise leads to an unbelievable conclusion. Don't say that logic has broken down because someone gave the syllogism "All men are immortal, Socrates is a man, therefore Socrates is still alive." How does [logic work]? Eliezer puts it better than I can:

EU is not "required" to trade away huge amounts of outcome-space for really good but improbable outcomes. EU applies preference models to novel situations, not to produce preferences but to preserve them. If you gave EU a preference model that matched your preferences, it would preserve the match and give you actions that best satisfy your preferences under the uncertainty model of the universe you gave it. And if it's not true that you would trade away huge amounts of outcome-space for really good but improbable outcomes, this is a fact about your preference model that EU preserves! Remember, EU preference models map lists of outcomes to classes of lists of real numbers, but the inverse mapping is not guaranteed to have support over the entire reals.

I think a decision algorithm fails if it makes you predictably worse off than an alternative algorithm, and the chief ways to do so are 1) to do the math wrong and be inconsistent and 2) to make it more costly to express your preferences or world-model. We have lots of hypotheses about low-probability, high-payout options, and if humans make mistakes, it is probably by overestimating the probability of low-probability events and overestimating how much we'll enjoy the high payouts, both of which make us more likely to pursue those paths than a rational version of ourselves. So it seems to me that if we have an algorithm that can correctly manage the budget of a pharmaceutical corporation, ba
When it makes decisions that are undesirable. There is no point deciding to run a decision algorithm which is perfectly consistent but results in outcomes you don't want. In the case of the Omega's-life-tickets scenario, one could argue it fails in an objective sense, since it will never stop buying tickets until it dies. But that wasn't even the point I was trying to make. I don't know if there is a name for this fallacy, but there should be: objecting to the premises of a hypothetical situation intended just to demonstrate a point. E.g. people who refuse to answer the trolley dilemma and instead say "but that will probably never happen!" It's very frustrating.

This is very subtle circular reasoning. If you assume your goal is to maximize the expected value of some utility function, then maximizing expected utility can do that if you specify the right utility function. What I've been saying from the very beginning is that there isn't any reason to believe there is any utility function that will produce desirable outcomes if fed to an expected utility maximizer. Even if you are an EU maximizer, EU will make you "predictably" worse off, in the sense that in the majority of cases you will be worse off. A true EU maximizer doesn't care, so long as the utility of the very low probability outcomes is high enough.
One name is fighting the hypothetical, and it's worth taking a look at the least convenient possible world and the true rejection as well. There are good and bad reasons to fight the hypothetical. When it comes to these particular problems, though, the objections I've given are my true objections. The reason I'd only pay a tiny amount of money for the gamble in the St. Petersburg Paradox is that there is only so much financial value that the house can give up. One of the reasons I'm sure this is my true objection is that the richer the house, the more I would pay for such a gamble. (Because there are no infinitely rich houses, there is no one I would pay an infinite amount to for such a gamble.)

I'm not sure why you think it's subtle--I started off this conversation with: But I don't think it's quite right to call it "circular," for roughly the same reasons I don't think it's right to call logic "circular." To make sure we're talking about the same thing, I think an expected utility maximizer (EUM) is something that takes a function u(O) that maps outcomes to utilities, a function p(A->O) that maps actions to probabilities of outcomes, and a set of possible actions, and then finds the action out of all possible A that has the maximum weighted sum of u(O)p(A->O) over all possible O. So far, you have not been arguing that every possible EUM leads to pathological outcomes; you have been exhibiting particular combinations of u(O) and p(A->O) that lead to pathological outcomes, and I have been responding with "have you tried not using those u(O)s and p(A->O)s?".

----------------------------------------

It doesn't seem to me that this conversation is producing value for either of us, which suggests that we should either restart the conversation, take it to PMs, or drop it.
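To make the definition concrete, here is a minimal sketch of such an EUM. The toy utility function and the two actions are mine, not anything from the thread:

```python
# A minimal expected-utility maximizer: given u(O), p(A -> O), and a set of
# actions, pick the action maximizing the sum over O of u(O) * p(O | A).
def eu_maximize(u, p, actions):
    def eu(a):
        return sum(prob * u(outcome) for outcome, prob in p(a).items())
    return max(actions, key=eu)

# Toy inputs: linear utility over dollar outcomes, and two actions.
u = lambda dollars: dollars
p = {
    "safe": {1: 1.0},             # a guaranteed $1
    "risky": {-5: 0.5, 10: 0.5},  # 50:50 between -$5 and $10
}
print(eu_maximize(u, p.get, ["safe", "risky"]))  # risky: EU 2.5 beats EU 1
```

With a different u(O) (say, one sharply concave in dollars) the same machinery picks "safe", which is the point about the preference model doing the work.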
This suggests buying tickets takes finite time per ticket, and that the offer is perpetually open. It seems like you could get a solid win out of this by living your life, buying one ticket every time you start running out of life. You keep as much of your probability mass alive as possible for as long as possible, and your probability of being alive at any given time after the end of the first "lifetime" is greater than it would've been if you hadn't bought tickets. Yeah, Omega has to follow you around while you go about your business, but that's no more obnoxious than saying you have to stand next to Omega wasting decades on mashing the ticket-buying button.
OK, change it so the ticket booth closes if you leave.
That's not the way in which maximizing expected utility is perfectly rational. The way it's perfectly rational is this. Suppose you have any decision making algorithm; if you like, it can have an internal variable called "utility" that lets it order and compare different outcomes based on how desirable they are. Then either:

* the algorithm has some ugly behavior with respect to a finite collection of bets (for instance, there are three bets A, B, and C such that it prefers A to B, B to C, and C to A), or
* the algorithm is equivalent to one which maximizes the expected value of some utility function: maybe the one that your internal variable was measuring, maybe not.
The first condition is not true, since it gives a consistent value to any probability distribution of utilities. The second condition is not true either, since the median function is not merely a transform of the mean function. I'm not sure what the "ugly" behavior you describe is, and I bet it rests on some assumption that's too strong. I already mentioned how inconsistent behavior can be fixed by allowing it to predetermine its actions.
You can find the Von Neumann--Morgenstern axioms for yourself. It's hard to say whether or not they're too strong. The problem with "allowing [the median algorithm] to predetermine its actions" is that in this case, I no longer know what the algorithm outputs in any given case. Maybe we can resolve this by considering a case when the median algorithm fails, and you can explain what your modification does to fix it. Here's an example. Suppose I roll a single die.

* Bet A loses you $5 on a roll of 1 or 2, but wins you $1 on a roll of 3, 4, 5, or 6.
* Bet B loses you $5 on a roll of 5 or 6, but wins you $1 on a roll of 1, 2, 3, or 4.

Bet A has median utility of U($1), as does bet B. However, combined they have a median utility of U(-$4). So the straightforward median algorithm pays money to buy Bet A, pays money to buy Bet B, but will then pay money to be rid of their combination.
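For what it's worth, the arithmetic in this example checks out (a quick sketch; the payoff tables just transcribe the two bets above):

```python
from statistics import median

# Payoff of each bet for each die roll 1..6, as described above.
bet_a = {1: -5, 2: -5, 3: 1, 4: 1, 5: 1, 6: 1}
bet_b = {1: 1, 2: 1, 3: 1, 4: 1, 5: -5, 6: -5}

rolls = range(1, 7)
print(median(bet_a[r] for r in rolls))             # 1.0
print(median(bet_b[r] for r in rolls))             # 1.0
print(median(bet_a[r] + bet_b[r] for r in rolls))  # -4.0
```

The combination pays -$4 on four of the six rolls (1, 2, 5, 6) and +$2 on the other two, which is where the U(-$4) median comes from.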
I think I've found the core of our disagreement. I want an algorithm that considers all possible paths through time. It decides on a set of actions, not just for the current time step, but for all possible future time steps. It chooses such that the final probability distribution of possible outcomes, at some point in the future, is optimal according to some metric. I originally thought of median, but it can work with any arbitrary metric. This is a generalization of expected utility. The VNM axioms require an algorithm to make decisions independently and Just In Time, whereas this method lets it consider all possible outcomes. It may be less elegant than EU, but I think it's closer to what humans actually want.

Anyway, your example is wrong, even without predetermined actions. The algorithm would buy bet A, but then not buy bet B. This is because it doesn't consider bets in isolation like EU, but considers its entire probability distribution of possible outcomes. Buying bet B would decrease its median utility, so it wouldn't take it.
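If I understand the proposal, it can be sketched as a brute-force search over portfolios of bets, scored by the median of the whole total-payoff distribution (the helper names and the exhaustive search are my framing; the bets are the ones from the die example upthread):

```python
from itertools import combinations
from statistics import median

# Payoffs per die roll for the two bets from the example upthread.
bet_a = {1: -5, 2: -5, 3: 1, 4: 1, 5: 1, 6: 1}
bet_b = {1: 1, 2: 1, 3: 1, 4: 1, 5: -5, 6: -5}
rolls = range(1, 7)

def median_of(portfolio):
    # Median total payoff over all die rolls for a set of bets held together.
    return median(sum(bet[r] for bet in portfolio) for r in rolls)

# Score every subset of the available bets by the median of its
# whole payoff distribution, and keep the best one.
best = max(
    [subset for n in range(3) for subset in combinations((bet_a, bet_b), n)],
    key=median_of,
)
print(len(best), best[0] is bet_a)  # 1 True: take bet A alone, decline bet B
```

Holding only bet A has median payoff $1 while holding both has median -$4, so the whole-distribution version does refuse the second bet (bet B alone ties with bet A alone by symmetry; the search just returns the first of the tied options).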
No, they don't.
Assuming the bet has a fixed utility, then EU gives it a fixed estimate right away, whereas my method considers it along with all other bets that it has made or expects to make, and its estimate can change over time. I should have said that it's not independent or fixed, but that is what I meant.
In the VNM scheme, where expected utility is derived as a consequence of the axioms, the way that a bet's utility changes over time is through changes in the utilities of its outcomes, which are not fixed. Nothing at all stops you from changing the utility you attach to a 50:50 gamble of getting a kitten versus $5 if your utility for a kitten (or for $5) changes: for example, if you get another kitten or win the lottery. Generalizing to allow the value of the bet to change when the value of the options did not change seems strange to me.
I am lost, this is just EU in a longitudinal setting? You can average over lots of stuff. Maximizing EU is boring, it's specifying the right distribution that's tricky.
It's not EU, since it can implement arbitrary algorithms to specify the desired probability distribution of outcomes. Averaging utility is only one possibility, another I mentioned was median utility. So you would take the median utility of all the possible outcomes. And then select the action (or series of actions in this case) that leads to the highest median utility. No method of specifying utilities would let EU do the same thing, but you can trivially implement EU in it, so it's strictly more general than EU.
So, I think you might be interested in UDT. (I'm not sure what the current best reference for that is.) I think that this requires actual omniscience, and so is not a good place to look for decision algorithms. (Though I should add that typically utilities are defined over world-histories, and so any decision algorithm typically identifies classes of 'equivalent' actions, i.e. acknowledges that this is a thing that needs to be accepted somehow.)
UDT is overkill. The idea that all future choices can be collapsed into a single choice appears in the work of von Neumann and Morgenstern, but is probably much older.
Oh, I see. I didn't take that problem into account, because it doesn't matter for expected utility, which is additive. But you're right that considering the entire probability distribution is the right thing to do, and under that assumption we're forced to be transitive. The actual VNM axiom violated by median utility is independence: if you prefer X to Y, then a gamble of X vs Z is preferable to the equivalent gamble of Y vs Z. Consider the following two comparisons:

* Taking bet A, as above, versus the status quo.
* A 2/3 chance of taking bet A and a 1/3 chance of losing $5, versus a 2/3 chance of the status quo and a 1/3 chance of losing $5.

In the first case, bet A has median utility U($1) and the status quo has U($0), so you pick bet A. In the second case, a gamble with a possibility of bet A has median utility U(-$5) and a gamble with a possibility of the status quo still has U($0), so you pick the second gamble. Of course, independence is probably the shakiest of the VNM axioms, and it wouldn't surprise me if you're unconvinced by it.
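The two comparisons can be checked numerically (a sketch; `quantile_median` is my name for the payoff at the 50th percentile of a discrete distribution, and bet A is as defined upthread):

```python
def quantile_median(dist):
    # dist: list of (probability, payoff) pairs; returns the payoff at
    # the 50th percentile of the distribution.
    total, acc = sum(p for p, _ in dist), 0.0
    for p, payoff in sorted(dist, key=lambda t: t[1]):
        acc += p / total
        if acc >= 0.5:
            return payoff

# Bet A as defined upthread: lose $5 w.p. 1/3, win $1 w.p. 2/3.
bet_a = [(1/3, -5), (2/3, 1)]
status_quo = [(1.0, 0)]

def mix(option):
    # 2/3 chance of the option, 1/3 chance of losing $5.
    return [(2/3 * p, x) for p, x in option] + [(1/3, -5)]

print(quantile_median(bet_a), quantile_median(status_quo))            # 1 0
print(quantile_median(mix(bet_a)), quantile_median(mix(status_quo)))  # -5 0
```

Mixing in the extra 1/3 chance of losing $5 pushes bet A's total probability of -$5 to 5/9, past the 50% mark, which is what flips the preference and violates independence.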
This is a common problem which is handled by robust statistics. Means, while efficient, are notably not robust. The median is a robust alternative from the class of L-estimators (L is for Linear), but a popular alternative for location estimates nowadays is something from the class of M-estimators (M is for Maximum Likelihood).
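A quick illustration of efficient-but-not-robust versus robust (the sample numbers are made up):

```python
from statistics import mean, median

# Seven well-behaved measurements, then the same sample with one wild outlier.
data = [9, 10, 10, 11, 10, 9, 11]
contaminated = data + [1000]

print(mean(data), median(data))                  # 10 10
print(mean(contaminated), median(contaminated))  # 133.75 10.0
```

One bad point drags the mean an order of magnitude away from the bulk of the data, while the median doesn't move; M-estimators aim for a tunable compromise between these two behaviors.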
Maximum likelihood doesn't really lead to desirable behavior when the number of possibilities is very large. E.g. I roll a die where a 2 or a 3 wins you a dollar, and unrelated but horrible things happen on any other number.
Maximum likelihood means taking the outcome with the highest probability relative to everything else, correct? This isn't really desirable, since the outcome with the highest probability might still have very low absolute probability.
No, not at all, what you are talking about is called the mode of the distribution. Why don't you look at the links in my post?
And the equation. I don't see how it's different from the mode. Even the graphs show it as being the same: 1 2.
Think about a bimodal distribution, for example. But in any case, we were talking about M-estimates, weren't we?
Among other issues with aiming for the middle of the road, I suspect that a Pascal's mugger who knows that you go for median (or, more generally, x-percentile by count) expected utility will be able to manufacture an offer where the median-utility calculation makes you give in, just like the maximum-expected-utility calculation does.
I bet a median-utility maximizer can be exploited. But I don't believe one can be exploited by a Pascal's mugging. What makes a Pascal's mugging a Pascal's mugging is that it involves a very low probability of a very large change in utility.
Do you believe that the 99.999-percentile by utility-ordered outcome count can be Pascal-mugged? How about 90%? Where is the cut-off?
I'm not sure this is a useful question. I mean, if you choose the (1-p) quantile (I'm assuming this means something like "truncate the distribution at the p and 1-p quantiles and then take the mean of what's left", which seems like the least-crazy way to do it) then any given Pascal's Mugging becomes possible once p gets small enough. But what I have in mind when I hear "Pascal's Mugging" is something so outrageously improbable that the usual way of dealing with it is to say "eh, not going to happen" and move on (accompanied by a delta-U so outrageously large as to allegedly outweigh that), and I take Houshalter to be suggesting truncating at a not-outrageously-small p, and the two don't really seem to overlap.
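That "least-crazy" reading can be sketched concretely (the function name, the tail handling, and the toy gamble are all mine):

```python
def truncated_mean(dist, p):
    # dist: list of (probability, payoff) pairs. Drop the lower and upper
    # p tails of probability mass, then take the weighted mean of the rest.
    dist = sorted(dist, key=lambda t: t[1])
    total = sum(prob for prob, _ in dist)
    lo, hi, acc, kept = p, 1 - p, 0.0, []
    for prob, payoff in dist:
        start, end = acc, acc + prob / total
        acc = end
        # Probability mass of this payoff that falls inside the kept band.
        w = max(0.0, min(end, hi) - max(start, lo))
        if w > 0:
            kept.append((w, payoff))
    return sum(w * x for w, x in kept) / sum(w for w, _ in kept)

# A Pascal's-Mugging-shaped gamble: almost surely lose $1, with a
# one-in-a-trillion chance of winning 10**15 dollars.
mugging = [(1 - 1e-12, -1), (1e-12, 10**15)]
print(truncated_mean(mugging, 0.01))  # -1.0: the tiny tail is trimmed away
```

With any not-outrageously-small p, the mugger's 1e-12 sliver of probability mass lies entirely in the discarded tail, so the rule values the gamble at its near-certain loss.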
There is reason to believe that "expected number of reproductions" is more aligned with natural selection than most other candidates. However, organisms can't directly decide to prosper; they have to do it via specific means. That is why a surrogate is expected. You can't say that utility maximization would be a bad surrogate, as it is almost defined to be the best surrogate. Now, that doesn't mean that what your cognitive ritual calls utility need correspond to actual utility, but it doesn't destroy the concept.
In an infinite world, expected reproductions would be a good thing to maximize. An organism that had 3^^^^3 babies would vastly increase the spread of its genes, and so it would be worth taking very, very low probability bets. But in a finite world all such bets will lose, leaving behind only organisms which don't take such bets, in the vast majority of worlds.
Not quite, such an organism is likely to devastate its ecosystem in one generation and die out soon after that.
A reason why any amount of sustainable growth is preferable to a large one-shot.
Your argument seems to use the expected number of copies to argue in favour of forgetting about the expected number of copies. In a way this is illustrative: an organism that only cares about sex but not about defence is more naive than one that sometimes forgoes sex to meet defence needs. But in a way the defence option provides for more copies. In this way sex isn't choosing to make more copies; it is only one strategy path to it that might fail. Arguing about finiteness is like claiming to know the maximum size of bets the universe can offer. But how can one be sure about the size of that limit? There is, though, an argument that a species that has lived a finite time will have only a finite amount of evidence, and thus a limit on the certainty it can achieve. There are some propositions that might exceed this limit. However, using any probability analysis to tune your behaviour to these propositions would be arbitrary. That is, there is no way to calculate unexpected utility, and expected utility doesn't take a stance on what grounds you expect that utility to take place.
It seems one problem with using the median is that the result depends on how coarsely you model the possible outcomes. E.g. suppose I am considering a bus trip: the bus may be on time, arrive early, or arrive late; and it may be late because it drove over a cliff killing all the passengers, or because it caught fire horribly maiming the passengers, or because it was stuck for hours in a snowstorm, or because it was briefly caught in traffic. With expected utility it doesn't matter how you group them: the expected value of the trip is the weighted sum of the expected value of being late/on-time/early. But the median of [late, on time, early] is different from the median of [cliff, fire, snowstorm, traffic, on time, early].
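A toy illustration of the coarseness problem (the utility numbers are invented and only their ordering matters; probabilities are ignored, as in the unweighted medians in the comment):

```python
from statistics import median

# Made-up utilities for the bus-trip outcomes, worst to best.
utilities = {
    "cliff": -100, "fire": -90, "snowstorm": -10, "traffic": -1,
    "on time": 0, "early": 1,
}

# Coarse model: lump all four late outcomes into one "late" category,
# summarized here by the median of its sub-outcomes.
late = median([utilities[o] for o in ("cliff", "fire", "snowstorm", "traffic")])
coarse = [late, utilities["on time"], utilities["early"]]
fine = list(utilities.values())

print(median(coarse))  # 0: modeled coarsely, the trip looks fine
print(median(fine))    # -5.5: modeled finely, the same trip looks bad
```

An expected-utility calculation over the same numbers would give the same answer under either grouping, which is the additivity point made above.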

It seems that watching talkative sports fans watch sports might be a big opportunity to observe the bias that makes people evaluate bad and good properties in a lump, the Affect Heuristic. And sports like biathlon are more handy than, say, football, since they give rapid binary updates (for the shooting) and almost-binary (?) ones for the running. And you can control for variables like 'country', etc. What do you think?

I have devised (automatically; I have just let it grow) an algorithm which enumerates all the leap years in the Gregorian calendar using the cosine function, scrapping the ugly constants of 100 and 400.


I'm having difficulty envisioning what problem this solves. Leap years are already defined by a very simple function, and subbing in a cosine for a discrete periodicity adds complexity, does it not?
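For reference, the "very simple function" in question, with its 100 and 400 constants (the standard Gregorian rule):

```python
def is_leap(year):
    # Gregorian rule: every 4th year is a leap year, except centuries,
    # except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 1996, 2000, 2015) if is_leap(y)])  # [1996, 2000]
```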
I think (although Thomas leaves it frustratingly unclear) the point is that this algorithm was discovered by some kind of automatic process -- genetic programming or something. (If Thomas is seriously suggesting that his algorithm is an improvement on the usual one containing the "ugly constants" then I agree that that's misguided.)
Last line of the article explains the motivation:

Having an algorithm fit a model to some very simple data is not noteworthy either. It's possible that the means by which the "pure mechanical invention" was obtained are interesting, but they are not elaborated on in the slightest.