Previous Open Thread

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

[anonymous] · 8y · 37 points

After Heartbleed, I got really irritated at how much time it took to hunt down the "change password" links for all the services I used. So, in the name of fighting trivial inconveniences, I made a list of direct password-and-two-factor-updating links for various popular services: UpdateYourPasswords.

This is beautiful and really useful seeming. I'm happy it exists, so thanks for making it.

You're welcome! It doubled as a pedagogical introduction to jQuery, so usefulness all around. Sidenote: I want this to be as useful as possible to as many people as possible, but I'm not sure how to promote it without seeming spammy.
You could "Show HN" if you haven't already; such things are usually appreciated there.
Did this yesterday, but went unseen: [] I get the sense that posting again is frowned upon.

FHI's "ask us anything" thread is on the front page of reddit. Congratulations!

I'm reading Ayn Rand's "The Virtue of Selfishness" and it seems to me that (a part of) what she tried to say was approximately this:

Some ethical systems posit a false dichotomy between "doing what one wants" and "helping other people", and then derive an 'ethical' conclusion that "doing what one wants" is evil and "helping other people" is good, by definition. Which is nonsense. Also, humans can't psychologically abstain completely from the "doing what they want" part (even after removing "helping other people" from it); but instead of realising the nonsense of such ethics, they feel guilty, which makes them easier to control.

I don't read philosophy, so I can't tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it's not exactly what the original philosopher said; maybe it's just a popular simplification.) There is the unspoken assumption that when people "do what they want", that does not include caring about others; that people must be forced into p... (read more)

I've been reading Pinker's "Better Angels of Our Nature", and it seems to me that people don't need to be psychopaths to have difficulty feeling empathy and concern for other people. If you've read HPMOR, the villagers who used to enjoy cat-burning are a good example, which Pinker uses. He suggests that our feelings of empathy have increased over time, although he's not sure why. So in earlier eras, a couple of people in their better moments might have claimed that caring about others was important, but people were generally more selfish, so the two fell out of sync.

I mean, even today when you say you care about other people, you don't suddenly donate all of the money that isn't keeping you alive to effective charities, because of the empathy you don't feel with every single other person on this earth. You don't have to be a psychopath for that to happen.

This reminds me of this part from "The Failures of Eld Science []": Maybe, by analogy, it would be wise to regard the former civilizations as psychopaths, although they were not. This includes religions, moral philosophies, etc. The idea is that those people didn't know what we know now... and probably also didn't feel what we feel now. EDIT: To be more precise, they were capable of having the same emotions; they just connected them with different things. They had the same chemical foundation for emotions, but connected them with different states of mind. For example, they experienced fun, but instead of computer games they connected it with burning cats; etc. (Of course there are differences in knowledge and feelings among different people now and in the past, etc. But there are some general trends, so if we speak about sufficiently educated or moral people, they may have no counterparts in the past, or at least not many.)
Funny, this is a decent summary of an idea I've had kicking around for a while, though framed differently. A more or less independent one, I think; I've read Rand, but not for about a decade and a half. I'd also add that "helping people" in this pop-culture mentality is typically built in a virtue-ethical rather than a consequential way; one is recognized as a good person by pattern-matching to preconceived notions of how a good person should behave, not by the expected results of one's actions. Since those preconceptions are based on well-known responses to well-known problems, a pop-culture altruist can't be too innovative or solve problems at too abstract a level; everyone remembers the guy that gave half his cloak to the beggar over the guy that pioneered a new weaving technique or produced an unusually large flax crop. Nor can one target too unfashionable a cause. Innovators might eventually be seen as heroes, but only weakly and in retrospect. In the moment, they're more likely to be seen neutrally or even as villains (for e.g. crowding out less efficient flax merchants, or simply for the sin of greed). Though this only seems to apply in certain domains; pure scientists for example are usually admired, even if their research isn't directly socially useful. Same for artists.
Yes, even when the "generally seen as good" actions are predictably failing or even making things worse, you are supposed to do them. Because that's what good people do! And you should signal goodness, as opposed to... uhm, actually making things better, or something.
IOW “Typical Mind and Disbelief In Straight People [] ” but s/straight/good/?
Exactly. This pattern of "taking something unbelievable other people said, and imagining what it would mean if, from their point of view, it made complete literal sense, even if that creates an impolite ad-hominem argument against them" probably has the potential to generate many surprising hypotheses. It probably needs a nice short name, to remind people to use it more often.
I do read philosophy, and this does seem like a strawman to me. I'm not aware of a single serious moral philosopher who believes there is a sharp dichotomy between "doing what you want" and "helping others". The only philosopher who comes close, I think, is Kant, who thought that the reasons for performing an action are morally relevant, above and beyond the action and its consequences. So, according to Kant, it is morally superior to perform an act because it is the right thing to do rather than because it is an act I want to perform for some other reason. Given this view, the ideal test case for moral character is whether a person is willing to perform an act that goes against her non-moral interests simply because it is the right thing to do. But this still differs from the claim that altruistic behavior is opposed to self-interested behavior.
I also read some philosophy, and while the dichotomy between doing what you want and helping others isn't often stated explicitly, it's common to assume that someone who is doing what they want is not benevolent and is likely to screw people over. Mainly it's only the virtue ethicists who think that egoists would be benevolent.
Well, no. For example, I care very much about these pebbles right here (these represent my friends), and recognize that there are many other people who don't care about these pebbles and instead care about totally different pebbles I don't care either way about. And some other people I know care about some of my pebbles, but not the rest, and I care about some of theirs but not the rest. It occurs to me that if there were a broad set of principles everyone agreed to which said that, ethically, all pebbles ought to be sorted, then everyone would care some about my pebbles, at the comparatively low cost for me of caring a little about other people's pebbles. Of course, from there it's a short step to people who conclude that, ethically, it is best to disregard your own particular attachment to your personal pebbles and be an effective pebblist, taking whatever actions most effectively sort pebbles anywhere even if that means your own pebbles are less sorted than they could be if you devoted more time to them. And some people take that too far and provoke Rayn And Pebblist to promote focusing on your own pebbles to the exclusion of all else.
The way out of this paradox is that no one wants to promote X themselves, but they want other people to do it.

I have recently discovered a technique called "ranger rolling" which has proven ridiculously useful in dealing with my clothing. It basically allows you to turn each item of clothing into an individual block, which you then use to play real life Tetris. This is a much better system than treating them as stacks of paper (which is what happens when you fold them) or as amorphous blobs (which is what happens when you shove them into drawers however you can).

I've never heard it called that, but I roll most of my clothes when traveling for work. They end up less wrinkled and you can fit a lot into a small volume. I highly recommend it.
Looks interesting, but I'm assuming this doesn't work if I like to iron my clothes before storing them. Is that right, or does the rolling not majorly detract from the ironing?

I don't iron my clothes before storing them, so I couldn't tell you, but surely this is an opportunity to practice the virtue of empiricism? Iron a couple of shirts, carefully roll them, leave them for a day or two, and check how the wrinkling compares to your usual method of storage. Then share your results for goodwill and karma.

Asking is also a virtue.

An idea: prestige-based prediction market.

Prediction markets (a.k.a. putting your money where your mouth is) are popular among rationalists, but kinda unpopular with governments. It is too easy to classify them as gambling. But if we remove the money, people have less incentive to get things right.

But there is also a non-monetary thing people deeply care about: prestige. This can't be used on a website with anonymous users, but could be used with famous people who care about what others think of their opinions: journalists or analysts. So here is a plan:

A newspaper (existing or a new one) could make a "Predictions" section, where experts would be asked to assign probabilities to various outcomes. If they guessed correctly, they would gain points; if they guessed incorrectly, they would lose points. The points would influence their position on the page: opinions of predictors with more points would be at the top of the page (in a larger font); opinions of predictors with fewer points would be at the bottom (in a smaller font). Everyone starts with some given number of points; if someone drops below zero, they are removed from this newspaper section, forever. And a new p... (read more)
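The proposal leaves the point mechanics open beyond "gain points / lose points". One standard way to fill that gap is a proper scoring rule, under which stating your honest probability maximizes expected points. A minimal Python sketch (the function name and the zero-at-50% normalization are illustrative choices, not part of the proposal):

```python
import math

def score_update(points, predicted_prob, outcome):
    """One possible point scheme: the logarithmic scoring rule,
    which rewards honest probabilities. `predicted_prob` is the
    stated probability that the event occurs; `outcome` is True
    if it actually did."""
    p = predicted_prob if outcome else 1.0 - predicted_prob
    # log2(p) is negative; the +1 makes a maximally
    # uninformative 50% guess score exactly zero change
    return points + math.log2(p) + 1.0

points = 100.0
points = score_update(points, 0.9, True)   # confident and right: small gain
points = score_update(points, 0.9, False)  # confident and wrong: larger loss
```

A nice property of the log rule is that overconfidence is punished harder than it is rewarded, so a predictor who wants to stay at the top of the page has no incentive to exaggerate.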

So you are suggesting a system that relies on the opinions of people who got selected because they really want to see their names at the top of the page in big font?

If the only way to see their names at the top of the page in big font is to provide correct predictions... why not?

The classical prediction market relies on opinions of people who got selected because they really wanted to make money on the prediction market. What's the big difference?

Okay... I can imagine that if someone's goal is to bring attention to themselves, they might make correct predictions to get to the top, and then intentionally make shocking (incorrect) predictions to bring even more attention to themselves. Kinda like people with too much karma sometimes start trolling, because, why not.

Money is a MUCH better motivator. In particular, making predictions is not costless. To consistently produce good forecasts you need to commit resources to the task -- off-the-cuff opinions are probably not going to make it. Why should serious people commit resources, including their valuable time, if the only benefit they get is seeing their name in big letters on top of a long list?
Well, this is something that can be tested experimentally. It could be statistically tested whether the results of the top predictors resemble random noise.

Some people spend incredible amounts of time on the internet, reading about stuff that interests them. I can imagine they could make good predictions in their area. (And there should be a "no vote" option for questions outside of their area.) Wikipedia exists even though it doesn't pay its contributors, unlike other encyclopedias. And there is some good stuff there. Also bad stuff... but that's what the competition between predictors could fix.

There is probably a limit on how difficult things can be predicted. But it could be higher than we imagine. Especially if the predictions become popular, so that for many topics there would be predictors whose hobby it is.

There are some technical details to solve, e.g. whether a predictor's prestige will be global or topic-dependent. (To prevent people from systematically giving 10 great predictions in topic X, and then 1 bad but very visible one in topic Y.) But that's like having multiple StackExchange accounts.
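The "resembles random noise" test can be made concrete. For a predictor who makes binary calls, a one-sided binomial test asks how likely their hit count would be if they were guessing blindly. A stdlib-only sketch (the function name is illustrative):

```python
from math import comb

def binomial_p_value(successes, trials, p=0.5):
    # One-sided P(X >= successes) under Binomial(trials, p),
    # i.e. the chance a blind guesser does at least this well.
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 16 correct out of 20 even-odds questions gives a p-value
# around 0.006, so "random noise" is a poor explanation.
print(binomial_p_value(16, 20))
```

In practice one would test against the predictor's own stated probabilities rather than a flat 50%, but the principle is the same.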
Pundits already make predictions all the time, but no one scores them -- even ones where the outcome is very clear, like finding a nuclear program in Iraq or Apple making a TV. So I think it is important to identify what the problem is and make sure you are actually addressing it.

Setting up special venues has some advantages, like making sure that the questions are precise and judgeable, and attracting the right kind of people. Prestige for pundits is basically venue. Moving up and down inside a venue is a pretty small scale of change. I suppose it is possible that if your venue has lots of churn, then the stakes would be higher. One venue for realish-money celebrity predictions is Long Bets [], but while the emphasis is on accountability, it sure isn't on building a track record.

Also: does any government other than the USA object to prediction markets? I suppose that ipredict's [] bankroll limitations indicate an objection by NZ, but I'm skeptical that this is a significant limiting factor.
There is a site called [] that tracks the predictions of pundits. Personally, I don't think it's all that interesting, in large part because the most concrete testable predictions pundits make are in areas not particularly interesting to me. But at least someone is trying.
[anonymous] · 8y · 14 points

Does anyone have a good grasp of the literature on the relationship between drinking and intelligence?

Correlational or causal -- e.g. how drinking affects intelligence, or how much intelligent people usually drink?
I'm interested in the causal aspects to help me decide how much I should be drinking.
Smart people are less likely to abstain [] from drinking (search for the word "floored"). I suspect that the quantitative trend is driven by the choice to drink or not and thus the correlation, even if were causal, is not relevant to the question of how much to drink.
Are you just asking how much drinking will make you stupider, long-term?
Yes. I presume it does make you stupider?
Not that I know of, at least in reasonable amounts (where "reasonable" is defined as not causing clinical-grade medical symptoms, like a failing liver). I haven't seen any evidence that moderate drinking lowers IQ. And if your drinking is immoderate, cognitive effects are probably not what you should be worried about.
I see. Google tells me that smarter people tend to drink more. Would I be right in assuming that this doesn't mean I should drink to get smarter?

Yes, you would be right. I don't think drinking helps with IQ -- it's mostly used as a stress reliever and a social lubricant, in which roles it functions well.

High certainty assigned to binge drinking causing some brain damage (if you have a hangover, you definitely binge-drank) via a combination of toxicity and depletion of key resources. Low certainty assigned to moderate drinking possibly having protective effects on the aging brain via its blood-thinning properties preventing stroke. Medium certainty assigned to absolutely no protective or beneficial effects for moderate drinking in youth (beyond fun and social benefits). Medium certainty assigned to the notion that for youth, the major drawback of moderate alcohol consumption is the risk of physically injuring yourself while intoxicated, not the actual toxicity. I really doubt there is significant damage associated with moderate drinking in youth: with the number of studies that have been done on this, if there were a huge noticeable difference in brain structure and function, we would have found it by now. However, I do think it will damage you at least a little bit.
There have been a couple of studies which say that, but I believe meta-analysis says the opposite: even moderate drinking is associated [] with increased rates of ischemic strokes (not just hemorrhagic). The only cause of death reduced among moderate drinkers is ischemic heart disease.

Question: what are the norms on showing up to meetups for the first time? I happen to be in Berkeley this week, and since there's a meetup this evening I thought I might check it out; should I just show up, or should I get in touch with the organizers and let them know I'm coming/around?

I predict that the answer will be something like "new attendees are welcome, announced or otherwise, but {insert local peculiarity here, e.g. 'Don't worry about the sign that says BEWARE OF THE LEOPARD, we're just getting ready for Towel Day'}". However, enough of my probability mass is elsewhere that I thought I'd check. Also, I couldn't find a definitive statement of the community norms within reach of Google, so I thought I'd change that by asking reasonably publicly.

As someone who has been on both sides, "just show up and introduce yourself" has been good every time so far.
That'll work fine at my local meetup. One tip: If there's a mailing list / Google Group / group / etc., get on it, so you can see topic announcements and contact info for the organizers.
[anonymous] · 8y · 12 points

What happened to the plans of creating more thematic subforums? Is anyone who's in charge willing to implement them?

It doesn't seem like the webmasters, or administrators, of Less Wrong receive these requests as signals. Maybe try sending them a private message directly, unless the culture of Less Wrong already considers that inappropriate, or rude.
As a first approximation, if you want the LW codebase changed, you need to do it yourself.

The Person of Interest TV show is apparently getting pretty explicit about real-world AGI concerns.

With Finch trying to build a machine that can predict violent, aberrant human behavior, he finally realized that the only solution was to build something at least as smart as a human. And that’s the moment we’re in right now in history. Forget the show. We are currently engaged in an arms race — a very real one. But it’s being conducted not by governments, as in our show, but by private corporations to build an AGI — to build artificial intelligence roughly as intelligent as a human that can be industrialized and used toward specific applications.

...I’m pretty confident that we’re going to see the emergence of AGI in the next 10 years. We have friends and sources within Silicon Valley — there is currently a headlong rush and race between a couple of very rich people to try to solve this problem. Maybe it will even happen in a way that no one knows about; that’s the premise we take for our show. But we thought it would be a fun idea that the Manhattan Project of our era — which is preventing nuclear terrorism, that’s the quiet thing that people have been diligently working on for 10

... (read more)
One thing in the show that I see very rarely outside of LW is the AI taking over a person.
So I watched the first episode a while back and it seemed like they have an AI that models the world so well that it knows what's going to happen and who is involved. Maybe I missed something, but if it can tell what's going to happen, why can't it tell the difference between the one responsible for the bad thing happening and the victim?
I feel there's someone really competent behind the show, because your concern is addressed. Spoiler alert (not too much, but still): GUR TBBQ GUVAT VF GUNG VG PNA. UBJRIRE SVAPU AB YBATRE PBAGEBYF GUR ZNPUVAR, NAQ AB YBATRE PNA PBZZHAVPNGR JVGU VG. FB UR YRSG N IREL GVAL ONPXQBBE SBE UVZFRYS, V.R. GUR FFA UR ERPRVIRF ERTHYNEYL.
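The spoiler above uses ROT13, the usual convention for hiding plot details. To read it without an external tool, Python's stdlib `codecs` module suffices; a neutral example string is used here so the actual spoiler stays hidden:

```python
import codecs

def unrot13(text):
    # ROT13 is its own inverse: the same call encodes and decodes
    return codecs.decode(text, "rot13")

print(unrot13("Fcbvyre grkg tbrf urer."))  # -> Spoiler text goes here.
```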
[anonymous] · 8y · 8 points

Throwing a half-formed idea out there for feedback.

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues. Perhaps society should take that into account, and weigh their opinions more heavily. Right now, this would mean that gay marriage, marijuana legalization, abortion, etc would all very quickly become legal (In the US at least).

Possible counterarguments:

  1. Younger people haven't been right, they merely won the demographic battle and had their way. Current norms are wor

... (read more)

I think there's some truth to your counterarguments 1 and 2. Young people are easier to sway into any change-oriented movement, so any push for sweeping change will have a lot of youth behind it, even if it's an older person pulling the strings and reaping the benefits.

It was the youthful Red Guards who were guilty of the worst Cultural Revolution atrocities, and Pol Pot's regime was even more reliant on adolescent murderers killing everyone who had criminal traditional values or had received a traditional education.

In contrast, Deng Xiaoping was over 70 years old when he instituted his post Cultural Revolution reforms.

Aside from teaching basic mathematical skills and literacy, the major goal of the new educational system was to instill revolutionary values in the young. For a regime at war with most of Cambodia's traditional values, this meant that it was necessary to create a gap between the values of the young and the values of the nonrevolutionary old.

The regime recruited children to spy on adults. The pliancy of the younger generation made them, in the Angkar's words, the "dictatorial instrument of the party."[citation needed] In 1962 the communists had created a

... (read more)
[anonymous] · 8y · 12 points

I'm Against Moral Progress. I don't think moral progress, the way we usually talk about it, is well founded. We observe moral change; then, since past moral change made values ever more like our present values on average (which is nearly a tautology), we decide the process itself must be good, despite having no clear understanding of how it works.

A similar confusion fogs many people regarding evolution: having noticed that they like opposable thumbs and that, over time, past hominids have come to resemble present hominids ever more, they often imagine evolution to be an inherently good process. This is a horribly wrong perception.

Younger people haven't been right, they merely won the demographic battle and had their way.

Young people in general are good at picking winners and quickly adapting to what is popular. Younger people's status quo bias will also fixate on newer norms, compared to older people whose aliefs say the status quo is something else. Winners will also tend to try to influence them, especially in our society, where voting power and public opinion grant legitimacy.

Younger people haven't been right, but despite being a young person who has over the past 3 ye... (read more)

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.

I guess you're not from a country that had Stalin Youth around in the 1970s. (We weren't an Eastern Bloc country either, they were just useful idiots.)

[anonymous] · 8y · 12 points

In the 1970s, intelligent young American students at Harvard favored the Khmer Rouge.

Since the U.S. incursion into Cambodia in the spring of 1970, and the subsequent saturation-bombings, The Crimson has supported the Khmer Rouge in its efforts to form a revolutionary government in that country. …

In the days following the mass exodus from Phnom Penh, reports in the western press of brutality and coercion put these assumptions into doubt. But there were other reports on the exodus. William Goodfellow in the New York Times, and Richard Boyle, the last American to leave Phnom Penh, in the Colorado Daily, reported that the exodus from major cities had been planned since February, that unless the people were moved out of the capital city they would have starved, and that there was a strong possibility of a cholera epidemic. The exodus, according to these reports, was orderly; there were regroupment centers on all of the major roads leading out of Phnom Penh, and people were reassigned to rural areas, where the food supplies were more plentiful.

There is no way to assess the merits of these conflicting reports—and if there were instances of brutality and coercion, we condemn them—but the goals of

... (read more)
The idiocy of the former group seems greater to me, because for them the horrors happened geographically closer (which should shift them more into "near" mode), and they had enough time to learn about what happened (enough evidence). EDIT: On the other hand, the former group had a realistic chance of becoming the new leaders, while the latter praised someone who would have killed every single one of them. But otherwise, both are examples of: "yeah, it seems that millions have died horribly, but our belief that our role models are the good guys remains unshaken."

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.

Taboo "right."

Alternatively: there is no "right and wrong"; it's a subjective value judgement, and, similar to #1, they are ultimately the ones whose subjective values are taken to be objectively "right".
This gallup report [] suggests that views on abortion are more complicated. Young people are most likely to favor no restrictions on abortion, but also most likely to favor a categorical ban (even more likely than the 65+ crowd).
i.e., young people are most likely to have the least complicated views on abortion!
In the US, under-30 adults have less liberal views on abortion [] than middle-aged adults, and the under-30s were getting less liberal about it more quickly than older adults until 2010 or so [] . (Also, abortion's been legal in the US for four decades.)
What were the equivalents of marijuana legalization and same-sex marriage 20 years ago? 40 years ago? Etc. And what policies did young people support that weren't enacted?
Let's see. In terms of youth subcultures, 1994 would be a little after grunge had peaked; punk would have been on its way out as a mass movement rather than a vestigial scene, but it still had some presence. Rage Against The Machine was probably the most politicized band I remember from that era, although it wasn't linked to any particular movement so much as a generalized morass of contrarian sentiment. Anti-globalization wouldn't peak for another five years, but it was picking up steam. Race relations were pretty tense in the aftermath of the 1992 Rodney King riots. Tibetan independence was a popular cause. I also remember environmentalism having a lot of presence -- lots of talk about deforestation, for example. I don't remember much in the way of specific policy prescriptions, though. Bill Clinton had just been elected, and I think he introduced his health care reform plan about that time. That one failed, but I don't remember it showing the same generational divisions that marijuana legalization and same-sex marriage now do. 'Course, I could be wrong; I was pretty young at the time.
A whole lot of things having to do with race, I believe. This is what I don't know, and would like to pick LW's brains for.
Nuclear disarmament, Non-Aligned countries, decolonization of Africa, ending the Vietnam War, and stopping the red scare witch hunts.
Are you saying that these were distinctively supported by young people? If so, I'm skeptical that "stopping the red scare witch hunts" falls in that category, at least if you mean McCarthyism. The others seem more reasonable to me.
One of these things is not like the others ...
Ending the Vietnam War? Although young people in 1969 reported being more likely to sympathize with anti-war demonstrators' goals [], they were generally less likely [] to call the war a "mistake", at least between 1965 & 1971.
Which one? I mean, superficially, the second option on the list ("Non-aligned countries") is not actually a policy proposal, but I'm assuming the charitable reading is something like "Support for non-alignment". Is that what you meant, or something else?
Decolonization of Africa.
I don't get it. In what respect is that not like the others?
It wasn't a good outcome.
Neither was ending the Vietnam war. For that matter, did the Non-Aligned movement accomplish much of anything besides providing cover for various dictators?
I considered those two options but figured that they're more muddled cases. African decolonization was clearly an order of magnitude worse than the worst-case interpretations of Vietnam + Non-Aligned.
This kind of feels like suggesting "if you notice that your tribe is becoming extinct, you should help to speed up the process".
[Citation needed] Not to mention that you treat "younger people" as a homogenous group which, quite clearly, it is not.
Of course. The observation is that different demographics show markedly different attitudes on social issues, and that one such demographic seems to have a tendency to get things right. There are many possible counterarguments, but I am not convinced that the basic idea is unworkable.
Let me offer you an alternate explanation. One thing about which I feel pretty safe generalizing is that the youth has considerably higher risk tolerance than the elderly. A consequence of that is that the young will actually go out and try all the ideas which swirl around any given culture at any given time. Most will turn out to be meh, but some will turn out to be great and some -- horrible. Fast-forward about half a century and you know what? The elderly very clearly remember how, when young, they supported all the right ideas and very thoroughly forget how they supported the ideas which now decorate the dustbin of history. Rinse and repeat for each generation.
Solvent offered this hypothesis as #2.
Yes, and I suggest a plausible mechanism for that.
I'm just saying that your first hostile comment is inappropriate and calling it "alternative" is misleading.
This can be tested. Organize a huge youth conference that will provide dozens of new ideas. Record the ideas, wait 20 years, and review them; wait 50 years and review them yet again. Also, compare with new good things that happened meanwhile, but were not suggested at the conference.
I suggest trying to find evidence about issues that made a larger difference, such as support for Mao or for fighting major wars. Maybe there's a principled definition of "social issues" that excludes things about which the young are wrong, but I'll guess that it's hard to find consensus about such a definition.
It only seems this way because of selection bias. Young people generally want to change stuff, and history/ethics is written by the victors. In cases where young people lost, the status quo was maintained, which means we don't pay as much attention.
I think, in practice, young people would have to start voting more in order to have their opinions reflected in politics. Voter turnout among young adults is very low, so when politicians make decisions, they feel like they can safely ignore their concerns.

So recently I've been philosophizing about love, sex, and relationships. I'm a man, and I experience lust much more often than love. However, it seems like long-term relationships are better than short-term relationships for a variety of reasons: consistently having sex through short-term relationships as a man requires that you spend a lot of time chasing women, and I've read about many mental health benefits that come with being in a loving relationship that I assume don't come in to play if you're only having short-term relationships.

I'm an outgoing, m... (read more)

In my experience, the primary factor that generates love is the amount of time I spend thinking about a woman's positive qualities. It's a fairly simple and surprising hack of my brain that could very well work for you too: if I devote enough time (say, a couple of minutes every hour) to thinking about what I like in a woman, my brain will automatically start to generate feelings of love. Conversely, when I feel a lot of attraction, if I consciously stop thinking about her, the feelings' intensity diminishes drastically. It's almost as if the brain were seeking to maintain internal consistency: if you think a lot about someone you're attracted to, it must be because you're in love, and vice versa. The best part is that, since we don't have access to our internal workings, the feelings generated this way feel very true (they are true, actually). My advice, then: before trying more invasive hacks of your brain, just devote some time to regularly thinking about what you like in her. There's a good chance that soon you'll start idealizing her.
One of the ways of building intimacy or closeness, a key component of companionate love (the type you seem to be going for here; have a look at the research on passionate vs. companionate love if you're interested), is self-disclosure that one's partner responds to with warmth, understanding, and supportiveness. You can spend a lot of time doing things together without having this self-disclosure: to get it, you need to want to disclose/hear more about the other person, and preferably have dates etc. where you spend some time just talking, in private, about your pasts or your thoughts - things that might lead to self-disclosure. So: first, set up these situations. Second, talk about your past and your thoughts and try to be open - be trusting; relate casual conversations to things you hold close to you. Third, if your partner opens up to you, make sure to respond supportively and engage with it, rather than brushing it off or turning the conversation to less close topics. Which is not to say you should do this all the time; fun dates and silliness and dancing in a club way too loud to talk in are good too. But with any luck, adding a bit more of this in will help you feel that connection and intimacy.
There is a thin line between changing your desires and suppressing them. You may replace a goal X with a goal Y, or you may merely convince yourself that you did. -- Think about all the religious homosexual people who try to hack themselves into heterosexuality, believe they have succeeded, and then later realize it didn't work. Is there a way to get both X and Y? For example, having an open long-term relationship with one partner, and a few short-term relationships when you need them. Or, to save time and effort, one long-term emotional relationship, and a few long-term contacts where both sides only want sex occasionally.

This writing style has become a cliche:

Imagine a hypothetical situation X. In situation X, you would surely agree that we ought to do Y. Surprise! We actually are living in hypothetical situation X. So we are obliged to do Y.

Are there any ways to make it less obnoxious?

Why do you feel that it comes across as obnoxious?
I am not Omid, but: it feels like an attempt to sneak something past the reader -- and indeed that's clearly what it is. The writer might defend it along the following lines: "Few readers would listen if simply told to do Y, because it's very different from what we're used to doing. But, in fact, principles we all accept mean that we should all be doing Y. So I need a way of explaining this that won't get blocked off at the outset by readers' immediate aversion to the idea of Y." Which is fair enough, but if you notice it being done to you, you might feel patronized or manipulated.

And while we might like to think of ourselves as making moral judgements on the basis of coherent principles, it's at least arguable that really those principles are often less fundamental than the judgements they purport to explain -- so the reader's actual conclusion might be "... so I need to revise my moral principles somehow" rather than "so I need to do Y", but the argument is phrased in a way that rather bypasses that conclusion.

Having said all of which, I'll add that I don't think I myself find that sort of thing obnoxious; while I can't think of any case in which I've done the same myself, I can imagine doing so and don't feel any guilt at the thought. And I think that even if our moral principles are derived from reflection on more specific moral judgements rather than vice versa, it's reasonable to give precedence to such a principle even when a specific judgement turns out to conflict with it. So I don't altogether agree with the argument I'm putting in Omid's mouth. (Which is probably reason to doubt whether I've correctly divined his meaning. Omid, please correct me if my guess about what you find obnoxious is wrong.)
Ah, makes sense. I wonder if replacing "Surprise! We actually are living in hypothetical situation X." with "If we were living in X, how would we tell?" would be better.
Say what you're about to do, before doing it. Sometimes, providing a summary will activate an immune response. This is a red flag about the context in which you're communicating, but that doesn't mean you shouldn't participate in an effective way.

There's a woman that has recently started to treat me poorly, and I can't figure out why. I would like help in designing the most efficient social experiment that helps me to solve this riddle.
If it's not clear from the disclaimer above, this post is about a personal situation and contains details about the two persons involved and their feelings.

Some possibly useful background info: we've been dancing together regularly for about a year. I like her a lot, and some months ago I told her so, trying to be as level-headed as possible. She replied that she is ... (read more)

Here's an intervention, rather than a test: If she says something that hurts your feelings again, just say, "I know you're joking around, but that kind of hurts my feelings."

Instead of informing your model, inform hers.

That is a simple and worthwhile point of view. It made me change my mind, as per comment above, so upvoted!
If it were me, I would just assume she was lightheartedly teasing. If that's the case, the course of action would be to tease back, also in a lighthearted way. Either that, or reply with an extremely exaggerated form of self-deprecation: agree with her teasing, but in a way that exaggerates the original intent. Even if that's not the case, and she's being vindictive, I think responding as though she were teasing would be ideal anyway. Examples:

* "I tripped and almost fell on you. Oh but you would be happy if I accidentally fell on you, right?"
  * (tease back): "Clumsy people don't really do it for me"
  * (exaggerate): "That's because I have never had a woman touch me before in my life"
* "Oh no, you're going to need a triple X size."
  * (tease back): "I think you just like saying 'triple X'. Get your mind out of the gutter, thanks"
  * (exaggerate): "I'm going to cry myself to sleep over my size tonight"

If she laughs and/or plays along with these responses, she's probably just teasing. If she gets even more cruel in her response, then she's probably being intentionally vindictive.
I'll implement the 'tease back' strategy, plus I will also mention that I've noticed that she's treating me worse than usual lately. This way I'll gather intel both from her emotional and logical reactions, and will try to make up a single model of the situation.

I am far from an expert in these matters, but would advise against both teasing back and saying explicitly that you interpret the teasing as "treating me worse than usual".

[EDITED to add: To be clear, I mean "don't do both of these together" rather than "both of these are individually bad ideas".]

Why not both? What could go especially wrong?
Because one is playful and the other feels hostile. Doing both at once won't give you a clear sense of what her response is to either. Do them in separate encounters.
Why is teasing back a bad idea?
Apparently even with my edit I wasn't clear enough. Letting A be "tease back" and B be "mention that she seems to be treating you worse recently", I wasn't saying

* "don't do A, and don't do B"

but was saying

* "don't both-do-A-and-do-B".
If you ask her a direct question, I would take into account that this will more than likely engage her press secretary [] and you might not get the logical answer you are looking for.
Yeah, I explained myself poorly. By 'logical' I meant the 'rationalized' explanation. It should at least tell me if she's aware of the behaviour or not.
Really? Because if someone told me I wasn't treating them well, I would apologize and make nice regardless of whether I'd been doing it intentionally. I think you are overestimating how well confronting her will work to inform you. Think about (ahead of time) what response(s) you'd expect if it were all a misunderstanding and what response(s) you'd expect if it were deliberate. If there's a lot of plausible overlap between the two worlds, you won't learn very much, but you may make the whole thing more awkward by drawing attention to it.
I think you're right: telling her is not especially informative, plus it would surely modify her model of me and muddy the waters even more (I forgot to apply the principle that you disturb everything you measure). I think I'll just tease her back, and resort to telling her if and only if this escalates in a bad direction.
5Ben Pace8y
Y'know egocentric bias? Where people think the world revolves around them more than it does? I find that I often see my friends' actions in terms of what they think of me, but I imagine that they're in fact focused on me a lot less, so I would advise trying to discount that idea strongly. If it bothers you more, then just look at your options, e.g. mention it to her, don't mention it to her, think of an experimental test for the hypothesis, etc. Then pick one. Otherwise... worrying is of course useless if it isn't motivating a useful action, so attempt not to.
Yes, an experimental test is just what I want to create. That should be the useful action motivating my question. I understand that not discounting for egocentric bias is a reduced form of Pascal's wager: the small chance that her behaviour is correlated with me specifically has a huge payoff, so I'd better devote careful effort to discerning the probability that this is the case. However, if the pattern continues, I think the correlation becomes more and more probable.
3Ben Pace8y
I still think that people worry about what other people think of them more than is healthy, which is why I think the egocentric bias fix is important. If you can think of a test, try it if it worries you, but... Well, I don't know. Perhaps I'm Other-Optimising too much.
My probabilities for each scenario: 0.1 - 0.2 - 0.3 - 0.4 - 0.
After adjusting for egocentric bias, I'd say 0.2 - 0.2 - 0.3 - 0.3 - 0, even if this rings extremely wrong to my emotional brain.
In my experience, explicit declarations never work. You need to convey attraction subtextually.
Because of plausible deniability or some other factor? If I'm not mistaken, there were studies that showed we tend to like more people who like us.
[-][anonymous]8y 5

This never occurred to me until today, but can you solve the 'three wishes from a mischievous but rule abiding genie' problem just by spending your first wish on asking for a perspicuous explanation of what you should wish for? What could go wrong?

Asking what you "should wish for" still requires you to specify what you're trying to maximize. Specifying your goal in detail has all the same risks as specifying your wish in detail, so you have the same exposure to risk. Edit: See my longer explanation below []
You could possibly say "I wish for you to tell me what you would wish for if you had your current intelligence and knowledge, but the same values and desires as me." That would still require just the right combination of intelligence, omniscience, and literal truthfulness on the genie's part, though.
The genie replies, "What I would wish for in those circumstances would only be of value to an entity of my own intelligence and knowledge. You couldn't possibly use it. And besides, I'm sorry, but there's no such thing as 'your values and desires'. You're barely capable of carrying out a decision made yesterday to go to the gym today. You might as well ask me to make colourless green ideas sleep furiously. On a more constructive note, I suggest you start small, and wish for fortunate chances of a scale that could actually happen without me, that will take you a little in whatever direction you want to go. I'll count something that size as a milli-wish. Then tomorrow you can make another, and so on. I have to warn you, though, that still ends badly for some people. Giving you this advice counts as your first milli-wish. See you tomorrow."
I'm nowhere near that confident in my values and desires.
* "You should wish for me to tell you what you would wish for in your place if I had your current intelligence and knowledge, but the same values and desires as you."
* "I have replaced your values and desires with my own. You should wish to become a genie."
* "Here is your list of all possible wishes."
* "You should wish that genies never existed."
Can you explain why? Why couldn't that be exactly what I'm asking for?
"Should" implies a goal according to some set of values. Since the genie is mischievous, it might e.g. tell you what you should ask for so as to make the consequences maximally amusing to the genie.
Do you mean this as an empirical claim about the way we use the word? I think it's at least grammatical to say 'What should my ultimate, terminal goals be?' Why can't I ask the genie that?
Taboo the word "should" and try to ask that question again. I think you'll find that all "should"-like phrases have an implicit second part-- the "With respect to" or the "In order to" part. If you ask "what should I do tomorrow?", the implicit second part (the value parameter) could be either "in order to enjoy myself the most" or "in order to make the most people happy" or "in order to make the most money" You will definitely have to specify the value parameter to a mischievous genie or he will probably just pick one to make you regret your wish. It seems that you're asking for a universal should which assumes that there is some universal value or objective morality.
Well, I'm interested in your answer to the question I put to Kaj: are you making a linguistic claim here, or a meta-ethical claim? I take it that given this: ...that you're making the meta-ethical claim. So would you say that a question like this "What should my ultimate, terminal goals be?" is nonsense, or what?
Not complete nonsense, it's just an incomplete specification of the question. Let's rephrase the question so that we're asking which goals are right or correct, and think about it computationally. The process we're asking the genie to perform is:

1. Generate the list of all possible ultimate, terminal goals.
2. Pick the "right" one.

It could pick one at random, pick the first one it considers, pick the last one it considers, or pick one based on certain criteria. Maybe it picks the one that will make you the happiest, maybe it picks the one that maximizes the number of paperclips in the world, or maybe it tries to maximize for multiple factors that are weighted in some way while abiding by certain invariants. This last one sounds more like what we want. Basically, it has to have some criteria by which it judges the different options; otherwise its choice is necessarily arbitrary. So if we look at the process in a bit more detail, it looks like this:

1. Generate the list of all possible ultimate, terminal goals.
2. Run each of them through the rightness function to give them each a score.
3. Pick the one with the highest score.

So that "rightness" function is the one we're concerned with, and I think that's the core of the problem you're proposing. Either this function is a one-place function, meaning that it takes one parameter:

rightness_score(goals) => score

Or it's a two-place function [], meaning that it takes two parameters:

rightness_score(goals, criteria) => score

When I said earlier that all "should"-like statements have an implicit second part, I was claiming that you always have to take into account the criteria by which you're judging the different possible terminal goals that you can adopt. Even if you claim that you're just asking the first one, rightness_score(goals), the body of the function still implicitly calculates the score in some way according to some criteria. It makes more se
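The one-place vs. two-place distinction above can be sketched in code. This is only an illustration of the comment's point, not anything from the thread: the goals and the two scoring criteria below are made-up placeholders.

```python
from typing import Callable, List

Goal = str
Criteria = Callable[[Goal], float]

# Hypothetical criteria (placeholders): two genies judging by different standards.
def amusement(goal: Goal) -> float:
    return float(len(goal))    # made-up metric for a mischievous genie

def happiness(goal: Goal) -> float:
    return -float(len(goal))   # made-up metric for a benevolent one

def rightness_score(goal: Goal, criteria: Criteria) -> float:
    """Two-place version: the score explicitly depends on which criteria you pass in."""
    return criteria(goal)

def rightness_score_one_place(goal: Goal) -> float:
    """'One-place' version: the criteria are still there, just baked into the body."""
    return amusement(goal)

def pick_goal(goals: List[Goal], criteria: Criteria) -> Goal:
    # The three-step process from the comment: enumerate goals, score each, take the highest.
    return max(goals, key=lambda g: rightness_score(g, criteria))

goals = ["maximize paperclips", "flourish", "be happy"]
print(pick_goal(goals, amusement))   # a different criteria parameter...
print(pick_goal(goals, happiness))   # ...picks a different "right" goal
```

The point is visible in `rightness_score_one_place`: dropping the `criteria` parameter doesn't eliminate the criteria, it just hides which ones the genie chose.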
Hmm, you present a convincing case, but the result seems to me to be a paradox. On the one hand, we can't ask about ultimate values or ultimate criteria or whatever in an unconditioned 'one place' way; we always need to assume some set of criteria or values in order to productively frame the question. On the other hand, if we end up saying that human beings can't ever sensibly ask questions about ultimate criteria or values, then we've gone off the rails. I don't quite know what to say about that.
I'm not saying you can't ever ask questions about ultimate values, just that there isn't some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Any moral code we come up with, we have to come up with using our own brains, and that's okay. We're also going to judge it with our own brains, since that's where our moral intuitions live.

"The human value function", if there is such a thing, is very complex, weighing tons of different parameters. Some things seem to vary between individuals a bit, but some are near universal within the human species, such as the proposition that killing is bad and we should avoid it when possible. When wishing on a genie, you probably don't want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don't want to be in an eternal state of mindless bliss with no challenge or more complex value. That's because the "human value function" is very complex. We also don't know it. It's essentially a black box: we can compute a value on outcomes and compare them, but we don't really know all the factors involved. We can infer things about it from patterns in the outcomes, though, which is how we come up with generalities such as "killing is bad".

So after all this discussion, what question would you actually want to ask the genie? You probably don't want to change your values drastically, so maybe you just want to find out what they are? It's an interesting course of thought. Thanks for starting the discussion.
Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?
Not sure what you're asking. I guess it could be either an empirical or a logical claim, depending on which way you want to put it. Sure it's grammatical and sure you can, but if you don't specify what the "should" means, you might not like the answer. See my above comment.
Let me rephrase my question: Suppose I presented a series of well-crafted studies which show that people often use the word 'should' without intending to make reference to an assumed set of terminal values. I mean that people often use the word 'should' to ask questions like 'What should my ultimate, terminal values be?' Would your reaction to these studies be: 1) I guess I was wrong when I said that '"Should" implies a goal according to some set of values'. Apparently people use the word 'should' to talk about the values themselves and without necessarily implying a higher up set of values. or 2) Many people appear to be confused about what 'should' means. Though it appears to be a well formed English sentence, the question 'what should my ultimate, terminal values be?' is in fact nonsense. In other words, when you say that 'should' implies a goal according to some set of values, are you making a claim about language, such as might be found in a dictionary, or are you making a claim about meta-ethical facts, such as might be found in a philosophy paper? Or do you mean something else entirely?
I endorse the answers that TylerJay gave to this question, he's saying basically the same thing as I was trying to get at.
Do you have a response to the question I put to him? If it's true that asking after values or goals or criteria always involves presupposing some higher up goals, values or criteria, then does it follow from this that we can't ask after terminal goals, values, or ultimate criteria? If not, why not?
Yes and no. You could just decide on some definition of terminal goals or ultimate criteria, and ask the genie what we should do to achieve the goals as you define them. But it's up to you (or someone else who you trust) to come up with that definition first, and the only "objective" criteria for what that definition should be like is something along the lines of "am I happy with this definition and its likely consequences".
And you would say that the above doesn't involve any antecedent criteria upon which this judgement is based, say for determining the value of this or that consequence?
How is it different from what Eliezer calls "I wish for you to do what I should wish for" []?
Maybe it's only trivially different. But I'm imagining a genie that is sapient (so it's not like the time machine...though I don't know if the time machine pump thing is a coherent idea) and it's not safe. Suppose, say, that it's programmed to fulfill any wish asked of it so as to produce two reactions: first, to satisfy the wisher that the wish was fulfilled as stated, and second, to make the wisher regret having wished for that. That seems to me to capture the 'mischievous genie' of lore, and it's an idea EY doesn't talk about in that article, except maybe to deny its possibility. Anyway, with such a genie, wishing for it to do whatever you ought to wish for is probably the same as asking it what to wish for. I'd take the second option, because I'm not the world's best person, and I'd want to think it over before hitting the 'go' button.
I suspect that to be able to evoke this reaction reliably, the 100%-jackass genie [] would have to explicitly exclude the "do what I ought to have wished for" option, and so is at least as smart as a safe genie. I... do not follow at all, even after reading this paragraph a few times.
I agree that it's at least as smart as the safe genie, and I suppose it's likely to be even more complicated. The jackass genie needs to be able both to figure out what you really want and to figure out how to betray that desire within the confines of your stated wish. I realize I do this with my son sometimes when he makes up crazy rules for games: I try to come up with ways to exploit the rule, so as to show why it's not a good one. I guess that kind of makes me a jackass. Anyway, I take it you agree that my jackass genie is one of the possibilities? Being smart doesn't make it safe. And, as is the law of geniedom, it's not allowed to refuse any of my wishes. Sorry to be unclear. You asked me how my suggestion was different from just telling the genie 'just do whatever's best'. I said that my suggestion is not very different. Only, maybe 'do whatever's best' isn't in my selfish interest. Maybe, for example, I ought to stop smoking crack or something. But even if it is best for me to stop smoking crack, I might just really like crack. So I want to know what's in fact best for me before deciding to get it.
I think the problem is that 'mischievous but rule-abiding' doesn't sufficiently limit the genie's activities to sane ones. For instance, the genie pulls out a pen made entirely of antimatter to begin writing down a perspicuous explanation, and the antimatter pen promptly reacts with the matter in the air, killing you and everyone in the area. When the next person comes into the wasteland after it has stopped exploding and says "That's not mischievous, that's clearly malicious!", the genie points out that he can just bring them all back if someone wishes for it, so clearly it is only a bit of mischief, much like how many would consider taking a cookie from a four-year-old and holding it above their head mischievous: any suffering is clearly reversible. Oh, also: the last person asked for a perspicuous explanation of what they should wish for, and it is written down, with an antimatter pen on antimatter paper, in that massive, opaque, magnetically sealed box, which is just about to run out of power. And then THAT person also blows up when the box's power containment fails.
That's kind of a cool story, but that genie is I think simply malevolent. I have in mind the genie of lore, which I think is captured by these rules: first, to satisfy the wisher that the wish was fulfilled as stated, and second, to make the wisher regret having wished for that and third, the genie isn't allowed to do anything else. I don't think your scenario satisfies these rules.
Well, that's true, based on those rules. The first person dies before the wish is completed, so clearly he wasn't satisfied. Let me pick a comparably hazardous interpretation that does seem to follow those rules. The Genie writes down the perspicuous instructions in highly radioactive, radioluminescent paint, comparable to the paint that poisoned people in the early 1900s but worse, in a massive, bold font. The instructions are: 'Leave the area immediately and wish to be cured of radiation poisoning.' When the wisher realizes that they have in fact received a near-immediately-fatal dose of radiation, they leave the area, follow the instructions, and seem to be cured and not die. When they call out the Genie for putting them in a deadly situation and forcing them to burn a wish to get out of it, the genie refers them to Jafar doing something similar to Abis Mal in Aladdin 2. The Genie DID give them perfectly valid instructions for a concise wish. Had the Genie made the instructions longer, they would have died of radiation poisoning before reading them and making the wish, and instructions which take longer than your lifespan to use hardly seem to the Genie to be perspicuous. Is this more in line with what you were thinking of?
That's certainly a lot closer. I guess my question is: does this satisfy rule number three? One might worry that exposing the wisher to a high dose of radiation is totally inessential to the presentation of an explanation of what to wish for. Are you satisfied that your story differs from this one? Me: O Genie, my first wish is for you to tell me clearly what I should ask for! [The Genie draws a firearm and shoots me in the stomach] Genie: First, wish for immediate medical attention for a gunshot wound. This story, it seems to me, would violate rule three.
I think I need to clarify how it works when things that are totally inessential are being disallowed, then. Consider your wish for information again. What if the Genie says:

Genie A: "Well, I can't write down the information, because writing it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information... not since I hurt that fellow by writing something in radioactive luminescent paint."

Genie A: "And I can't speak the information, because speaking it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information... not since I hurt that other fellow by answering at 170 decibels."

Genie A: "And I can't simply alter your mind so that the information is present, because directly altering your brain is totally inessential... you see where I'm going with this. So what you should wish for with your second wish is that I can do things that are totally inessential to the wish... so that I can actually grant your wishes."

All of that SOUNDS silly. But it also seems at least partially true from the genie's perspective: writing isn't essential, because he can speak; speaking isn't essential, because he can write; brain alteration isn't essential, etc.; but having some way of conveying the information to you IS essential. So presumably the genie gets to choose at least one method from a list of choices... except choosing among a set of methods is what allowed him to hurt people in the first place (by choosing a method that was set for arbitrarily maximized mischief). Unless the Genie doesn't get to select methods until you tell him (hence making those methods essential to the wish, resolving the problem) -- however, that could lead to an entirely different approach to mischief:

Genie B: "Okay: First you'll have to tell me whether you want me to write it down, speak it out l

Would an average year in the life of an em in Hanson's Malthusian explosion scenario really be >0 QALY? Hanson has kinda defended this scenario because the ems would want to be alive but I don't think that means anything. I remember reading about mice and painful wireheading (probably Yvain's post) and how you can make mice want that kind of wireheading even though it's painful. Similarly it's easy to imagine how people would want to live painful and miserable lives.

Has he? I think his more typical defense is Poor Folks Do Smile [].
Yeah, I read that, reconsidered my impression, and it seems you are right. My memories of his opinion seem to have become muddled and simplified from several sources: his Uploads essay [], where he says "Most uploads should quickly come to value life even when life is hard or short, and wages should fall dramatically" (which doesn't seem to be a value statement); that poor-folks essay; this discussion here [] (on which he doesn't comment); and this video interview [], in which he repeatedly says that life will be okay even though we'll become more and more alienated from nature. But I don't think my view of his opinion was 100% incorrect. The distinction between "valuing your life" and "wanting to live" is interesting. If you want to live, does that automatically mean that you value your life? I mean, I've had days when maybe 95% of the time I've felt miserable and 5% of the time I've felt okay, and in the end I've still considered those days okay. If I want to have more of those kinds of days, does that mean I value misery? How do you assess the quality of life in these kinds of cases, and in cases where the 'misery' is even more extreme?
In your first paragraph, you agree with me that it isn't a value judgement, but then in your second paragraph, you go back to claiming that it is the foundation of his position. I think it is mainly a response to claims that uploads will be miserable. I think his position is that we should not care about whether the uploads value their lives, but whether we, today, value their lives; but he thinks that moral rhetoric does not well match the speaker's values. cf []
I would guess yes - but that might change depending on details. At the very least, if we decided on some way to measure QALYs (our current methodology is real simple!), and then tried to maximize that measurement, we'd at best get something that looked like pared-down ems. Ultimately, how you choose between futures is up to you. Even if something has an objective-sounding name like "quality-adjusted life years," this doesn't mean that it's the right thing to maximize.
Yes, wanting to live isn't perfect evidence of a life worth living. But it sure looks like it provides some Bayesian evidence. Looking at whether the ems want more copies of themselves and want faster clock speeds should provide stronger evidence, and it seems unlikely that ems who want neither of those will be common. Ems should have some ability to alter themselves to enjoy life more. Wouldn't they use that?
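The "weak Bayesian evidence" point can be made concrete with a toy likelihood-ratio calculation. All the numbers below are invented for illustration; the point is only that if even miserable minds usually want to keep living, the update from "this em wants to live" is small:

```python
# Toy Bayes-factor sketch with invented numbers: how much does observing
# "the em wants to live" shift belief that its life is worth living?

p_worth_prior = 0.5        # prior P(life worth living) -- assumed
p_wants_given_worth = 0.99 # assumed: worthwhile lives almost always want to continue
p_wants_given_not = 0.9    # assumed: even miserable minds usually want to continue

bayes_factor = p_wants_given_worth / p_wants_given_not  # ~1.1: weak evidence

# Update in odds form, then convert back to a probability
odds_prior = p_worth_prior / (1 - p_worth_prior)
odds_post = odds_prior * bayes_factor
p_worth_post = odds_post / (1 + odds_post)

print(round(bayes_factor, 2))  # 1.1
print(round(p_worth_post, 3))  # 0.524 -- only a small update
```

On these assumed numbers the counterevidence question also has a clean answer: if `p_wants_given_not` exceeded `p_wants_given_worth`, wanting to live would be evidence *against* a life worth living.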
If it provides Bayesian evidence, shouldn't there be something that would in principle provide counterevidence? I can't figure out what that kind of counterevidence would be. Can you imagine an em population explosion where at some point no ems would want to make copies of themselves? I've got the impression that once an em population explosion gets started, you can't really stop it, because the ems that want copies get selected no matter how miserable the situation. Since in this scenario almost all ems work at a subsistence level and there's a huge number of ems, if enjoying life makes them even slightly less productive, I don't think that kind of alteration would become very common, due to selection effects.
Evidence that most ems are slaves whose copies are made at the choice of owners would seem relevant. Making miserable workers a bit happier doesn't seem to make them less productive today. Why should there be no similar options in an em world?
As I understand it, the premise behind ems is that it's possible to copy human minds into computers, but not to understand the spaghetti code. There won't be an obvious way to just make workers happier.
I expect faster and more reliable evaluations of Prozac-like interventions. I also expect that emotions associated with having few CPU cycles are less strongly ingrained than those caused by lack of food.

Find a problem, if any, in this reasoning:


The only reasoning in there was that wearable tech doesn't make you a cyborg because you're a simulation. I'd say that even if the world is a simulation, there's no reason to go crazy with semantics. You call someone a cyborg when they'd qualify as a cyborg in a non-virtual world.
Wearable tech doesn't make you a cyborg because it isn't part of you? The fact that you scoff at things that sound like nonsense but which include the prediction that you will scoff is (at best) very weak evidence for the claim, since scoffing is what you would do anyway. Besides, the garbage man doesn't say anything about how he knows about the simulation; if he just made it up, then his believing it is also unrelated to whether or not it is true.
The "you're a simulation" argument could explain anything, and hence explains nothing. He managed to predict scoffing, but that wasn't a consequence of his hypothesis; it was just to be expected.

Meetup posts have started appearing on the RSS feed for Less Wrong Main.

I could switch my RSS feed to only include promoted posts, but that would increase the problem of the hiddenness of non-promoted Main posts. Is there something else that I could do, or does this need to be fixed on Less Wrong's end?

Meetups should have their own thread, like the Open Thread, not be posted in Discussion. Of course, we do not live in a should-universe...
This would lower the visibility of individual meetups, which in turn could lower attendance or the number of newcomers for meetups.
People interested in meetups would check the meetup thread or get notified when it is updated. Really important mega-meetups or inaugural meetups can still be posted in the usual place.

I recently posted on the rationality diary thread about a study deadline / accountability system I've been doing with zedzed. So far it's worked well for me, and I'd be happy to help others in the same way that zedzed is helping me. If anybody wants to use such a system for what they're studying, just ask. Unfortunately for most subjects I can't provide anything more than a deadline and some accountability, since I probably don't know the subject too well.

Also, if anybody else is willing to provide a similar service to the community (and perhaps can even provide some subject-specific guidance), please respond below so that people can contact you.

I'd be happy to provide deadlines or accountability to anyone else who wants it.

I'm having a problem with posting comments to Slate Star Codex-- they're rejected as spam, even though it's the email address I've been using and I haven't included any links. Anyone else having this problem?

Edited to add: whatever it was got fixed.

"Super Rationality Adventure Pals the Saturday morning cartoon! On 1080p from a BitTorrent near you." Please post plotlines and excerpts.

[an old comment I thought I'd revive.]

Conscientiousness: Alice keeps putting off a project, since she knows it'll only take an hour (say, fixing a roof - after all, you only need to start an hour before the rainstorm). Bob just does it. Alice gets rained on.

Coonscientiousness: Alice and Bob's town is slowly invaded by a herd of raccoons. Alice looks up how to deal with raccoons, asks for advice, and looks for successful people and copies them. Bob just does what he thinks of first, yelling at them to stay away from his garden, until he gets too tired and the raccoons eat all his vegetables.
Please, either 'a gaze of raccoons' or 'a nursery of raccoons'.
The moral of the episode: a group of raccoons is called a "gaze" or "nursery". The More You Know
Zombie raccoons!
Steal the plotline of "Feeling Pinkie Keen" from My Little Pony and fix the ending so instead of being about taking things on faith, it's about updating on the cumulatively overwhelming evidence that the quirky character's predictive ability actually works.
It's not about taking things on faith, it's about accepting that you don't have to know the inner workings of a model to realize that it's a good predictor. Or is that what you just said? I guess I need to watch the episode again, the moral must be different than what I remember.
I'd settle for a well-executed cartoon adaptation of Pratchett's Tiffany Aching books. Almost as good in terms of rationality, and a lot more marketable.

I'm a moderately long-term lurker (a couple of years), and in the last ~6 months have had much more free time due to dropping all my online projects to travel around Asia. As a result, I've ended up reading a lot of LW and having a huge amount of time to think. I really like understanding things, and it feels like a lot of the parts of how the interesting bits of reality work which were tricky for many years are making much more sense. This is pretty awesome and I can't wait to get back home and talk to people about it (maybe visit some LW meetups...

I guess you just have to try it. Make one article. Make it a standalone article about one topic. (Not an introduction to a planned long series of articles; just write the first article of the series. Not just the first half of an article, to be continued later; instead choose a narrower topic for the first article. As a general rule, links to already-written articles are okay, but links to not-yet-existing articles are bad, especially if those nonexistent articles are used as an excuse for why the existing articles don't have a conclusion.) Put the article in Discussion; if it is successful and upvoted, someone will move it to Main. Later, when two of your articles have been moved, perhaps put the third one directly in Main. The topics seem interesting, but it's not just what topic you write about, but also how you write it. For example, "The Layers of Evolution": I can imagine it written both very well and very badly. For example, whether you will only speak generally, or give specific examples; whether those examples will be correct or incorrect. (As a historical warning, read "the tragedy of group selectionism" for an example of something that seemed like it would make sense, but in the end it failed. There is a difference between imagining a mechanism and having a proof that it exists.) If you have a lot of topics, perhaps you should start with the one where you feel most experienced.
The Fuzzy Pattern Theory of Identity could reasonably be created as a standalone post, and probably the Layers of Evolution too. Guided Tour and Strange Loop of Consciousness too, though I'd rather have a few easier ones done before I attempt those. The other posts rely on one or both of the previous ones. Glad they seem interesting to you :). And yes, layers of evolution is the one I feel could go wrong the most easily (though morality and maths may be the hardest to explain my point clearly in). It's partly meant as a counterpoint to Eliezer's post you linked, actually, since even though altruistic group selection is clearly nonsense when you look at how evolution works, selfish group selection seems to exist under some specific but realistic conditions (at minimum, the single-cell to multicellular transition requires cells to act for the good of other cells, and social insects have also evolved cooperation). When individuals can be forced to bear significant reproductive losses for harming the group selfishly, selfishly harming the group is no longer an advantage. The cost of punishing an individual for harming your group is much smaller than the cost of passing up chances to help yourself at the expense of the group, so this is more plausibly evolvable, but it still requires specific conditions. I do need to get specific examples to cite, as well as general points, and do some more research before I'll be ready to write that one. I... would still feel a lot more comfortable about posting something which at least one other person had looked over and thought about, at least for my first post. I've started writing several LW posts before, and the main reason I've not posted them is worry about a negative reaction due to some silly mistake. Most of my ideas follow non-trivial chains of reasoning, and without much feedback I'm afraid of having ended up in outer Mongolia. Posting to Discussion would help a bit, but does not make me entirely comfortable. How about if I write up something on Google Docs,
I think that would remove a substantial portion of your potential readers. Just suck it up and post something rough in Discussion, even if it feels uncomfortable. For example: the piece that starts with "even though altruistic group selection is clearly nonsense" up until the end of the paragraph might be expanded just a little and posted stand-alone in an open thread. Gather reactions. Create an expanded post that addresses those reactions. Post it to Discussion. Rinse and repeat.
I think the better course of action is to post your ideas first in the Discussion section, let the feedback pour in, and then, based on what you receive, craft posts for the Main section. After all, that's what the Discussion section is for. This way, you'll get a lot of perspectives at once, without waiting for the help of a single individual.
Hm, you're suggesting making one rough post in Discussion, then using feedback to make a second post in Main? I can see how that's often useful advice, but I think I'd prefer to try to justify things thoroughly from the start, so I would find it hard to avoid making a Main-length post straight away. Revising the post based on feedback from Discussion before moving it seems like a good idea, though.
Well, you have three layers on LW: a post in an open thread, a Discussion post, and a Main post (which might get promoted). Within these you can pretty much refine any idea you want without losing too much karma (some you will lose, mind you; it's almost unavoidable), so you can reject an idea quickly or polish it until it shines enough for the Main section.

New family of materials discovered by accident

Does this suggest a problem with using Bayes to generate hypotheses? My impression is that using Bayes includes generating hypotheses by looking in the most likely places. Are there productive ways of generating accidents, or is paying attention when something weird happens the best we can do?

I think that Bayes is completely silent on how one should generate hypotheses...
NancyLebovitz: I'd have sworn that one of the first sequences I read was about improving science by using Bayes to make better choices among hypotheses. On the one hand, my memory is good but hardly perfect; on the other, choosing among hypotheses is related to, but not exactly the same as, generating hypotheses.
That, sure. You can easily argue that Bayesianism provides a better framework for hypothesis testing. But that's quite different from generating hypotheses.
A successful example of using Bayes to "generate hypotheses" is the mining/oil industry, which makes spatial models and computes posterior expected reward for different drilling plans. For general-science type hypotheses you'd ideally want to put a prior on a potentially very complicated space (e.g. the space of all programs that compute the set of interesting combinations of reagents, in your example), and that typically isn't attempted with modern algorithms. This isn't to say there isn't room to improve on the state of the art with more mundane approaches.
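The drilling-plan example reduces to a small Bayes-plus-decision calculation. The following is a minimal sketch with entirely invented numbers (prior, survey error rates, payoffs); real spatial models are vastly more elaborate, but the structure is the same:

```python
# Minimal sketch of posterior expected reward for a drilling decision.
# All numbers are hypothetical: a site either contains oil or is dry,
# and a seismic survey gives a noisy positive/negative signal.

prior_oil = 0.1           # P(oil) before any survey -- assumed
p_signal_given_oil = 0.8  # survey sensitivity -- assumed
p_signal_given_dry = 0.2  # survey false-positive rate -- assumed

# Posterior P(oil | positive signal), by Bayes' rule
evidence = p_signal_given_oil * prior_oil + p_signal_given_dry * (1 - prior_oil)
posterior_oil = p_signal_given_oil * prior_oil / evidence

# Expected reward of drilling: payoff if oil is found, minus the
# drilling cost, which is paid either way
payoff_if_oil = 50.0  # $M, hypothetical
drill_cost = 10.0     # $M, hypothetical
expected_reward = posterior_oil * payoff_if_oil - drill_cost

print(round(posterior_oil, 3))    # 0.308
print(round(expected_reward, 2))  # 5.38 -> drill; a negative value -> don't
```

A plan is then just a choice of sites and surveys, ranked by this kind of expected reward; "generating hypotheses" here means the model proposes where to look next.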

Do you think that the Last Psychiatrist is male or female?


I'm not sure TLP is a single person.
Ooh! Is it stylometrics time?
Good point. You should probably choose N/A.
TLP's anonymity was actually broken a while ago. I won't post details here, but if you feel you need evidence feel free to PM me. He's male. However, this doesn't remove the possibility that some works published under the TLP brand were written by another author.
There's no need to be coy about it, since obvious Google searches bring up a Quora question devoted to it and a Reddit thread that is probably the original source (though not obviously about the doxxing from Google's snippet). If you know of an older discussion, I'd be interested. My running across his identity is much of my motivation for this poll, partly because the clock is ticking on my ability to run the poll, and partly because of knowing the correct answer, though I can't explain that. The other reason is that the answer "female" is a reactionary shibboleth, which came up again on Yvain's blog. I don't find the possibility of an imposter at all plausible. The possibility that it is always the same team is more plausible, though I think unlikely.
I believe I read someone claim that TLP is written by a married couple. Before I read that I assumed TLP was a man, however. The writing style seems pretty masculine.

Does anyone understand how the mutant-cyborg monster image RationalWiki uses represents Less Wrong? I've never understood that.

It's a "Lava Basalisk".
Picture of Lava Basalisk, as noted by Oscar_Cunningham, linking to the concept of Roko's Basilisk, which is a large point of interest for RW users.

Shit rationalists say: a fellow LessWronger describing a person from Facebook:

He is an intellectual. Well, not in the "he reads LessWrong" sense, but in the "he can express himself using a whole sentence" sense.

I kept laughing for a few minutes (sorry, it's probably less funny here, out of context), and I promised I would post this on LW, keeping the author's identity secret. Ignoring the in-group applause lights, it is a nice way to express a difference between a person who merely provides interesting opinions, and a person who also ...