If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


A physics research team has members who can (and occasionally do) secretly insert false signals into the experiment the team is running. The goal is to practice resistance to false positives. A very interesting approach; it's the first time I've heard of physicists using it.

Bias combat in action :-)

The LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,”...

Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. ... The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.


Wait, I'm confused. How does this practice resistance to false positives? If the false signal is designed to mimic what a true detection would look like, then it seems like the team would be correct to identify it as a true detection. I feel like I'm missing something here.
I don't know the details, but the detection process is essentially statistical and very very noisy. It's not a "we'll know it when we see it" case, it's more like "out of the huge number of wiggles and wobbles that we have recorded, what can't we explain and therefore might be a grav wave". I would guess one of the points is that a single observation is unreliable in a high-noise environment.
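To make the parent comment's point concrete, here is a minimal toy sketch (emphatically not LIGO's actual pipeline; every name and number here is invented for illustration): a "detector" records noisy samples, a "detection" is any sample over a threshold, and a blind injection adds a fake signal so the vetting procedure can be exercised against a known ground truth.

```python
import random

random.seed(0)  # deterministic toy example

def record(n, signal_at=None, signal_strength=10.0):
    """Simulate n noisy detector samples; optionally inject a fake signal."""
    data = [random.gauss(0, 1) for _ in range(n)]
    if signal_at is not None:
        data[signal_at] += signal_strength  # the blind injection
    return data

def detect(data, threshold=4.0):
    """A crude 'pipeline': flag any sample exceeding the threshold."""
    return [i for i, x in enumerate(data) if x > threshold]

quiet = record(1000)                    # noise only
injected = record(1000, signal_at=500)  # with a blind injection

print(detect(injected))  # should include index 500
```

The point of the exercise is that the analysis team, not knowing whether `signal_at` was set, must decide from the flagged indices alone whether to claim a detection.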
This is really fascinating; I wonder which other existing big-science efforts 'blind injection' would benefit.

There is a list of blogs by LWers. There is a list of LWers on Twitter. There is a list of LWers on Tumblr.

Do there exist lists of LWers in other similar online communities, such as Pinterest, Instagram, DeviantArt, LiveJournal?


Two minutes' inspection of her thesis would, I think, lead any reasonable person to conclude that it was almost certainly not written by her adviser. The extremely unusual style is consistent with her adviser having, say, had all the actual clever mathematical ideas in it, but again the point here is merely that Piper is clearly intelligent, and being able to understand the material described in her thesis (which, again, I think it's clear she does if you actually look at the thesis) is itself indicative of a high IQ.

(PS. Hi, Eugine/Azathoth/Ra/Lion. This ...

An excellent piece about communication styles, in particular about a common type of interaction on the 'net which is sometimes seen on LW as well. I'll quote some chunks, but the whole thing is good.

Here’s a series of events that happens many times daily on my favorite bastion of miscommunication, the bird website. Person tweets some fact. Other people reply with other facts. Person complains, “Ugh, randos in my mentions.” Harsh words may be exchanged, and everyone exits the encounter thinking the other person was monumentally rude for no reason. ...


...

I agree with gjm that the remark about IQ is wrong. This is about cultures. Let's call them "nerd culture" and "social culture" (those are merely words that came immediately to my mind, I do not insist on using them).

Using the terms of Transactional Analysis, the typical communication modes in "nerd culture" are activity and withdrawal, and the typical communication modes in "social culture" are pastimes and games. This is what people are accustomed to doing, and what they expect from other people in their social circle. It doesn't depend on IQ or gender or color of skin; I guess it depends on personality and on what people in our perceived "tribe" really are doing most of the time. -- If people around you exchange information most of the time, it is reasonable to expect that the next person also wants to exchange information with you. If people around you play status games most of the time, it is reasonable to expect that the next person also wants to play a status game with you. -- In a different culture, people are confused and project.

A person coming from "nerd culture" to "social culture" may be oblivious to the status...

I figured this was an absurd caricature, but then this thing floated by on tumblr: Objective facts: white, patriarchal, heteronormative, massively racist and ableist?
Sigh. These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory". I had to breathe calmly for a few moments. Okay, I'm not touching this shit on the object level again. On a meta level, I wonder how many of the missing rationality skills these people never had, vs. how many they had but lost later when they became politically mindkilled.
I remember reading SEP on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where epistemology is tailored to suit political ideas (and they themselves most likely wouldn't disagree) but much less vice versa.

When I (or, I suppose, most LWers) think about how knowledge about the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and I have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.

On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory, which basically models the world as being full of different people who are adversarial to each other and try to promote different maps. It seems to carry an assumption that you cannot easily compare the accuracies of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. the choice is not between maps, but between mapmakers.

Well, there are situations where I would do something that fits this description very well. E.g., if I were trying to decide whether to buy a product which I was not able to put my hands on, and all the information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.

It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.

There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry, and since biases that favour a desirable inter
Seems like the essential difference is whether you believe that as the maps improve, they will converge. A "LW-charitable" reading of the feminist version would be that although the maps should converge in theory, they will not converge in practice because humans are imperfect -- the mapmaker is not able to reduce the biases in their map below a certain level. In other words, there is some level of irrationality that humans are unable to overcome today, and the specific direction of this irrationality depends on their "tribe". So different tribes will forever have different maps, regardless of how much they try.

Then again, to avoid "motte and bailey": even if there is a level of irrationality that humans are unable to overcome today even if they try, the question is whether the differences between maps are at this level, or whether people use this as a fully general excuse to put anything they like on their maps.

Yet another question would be who exactly the "tribes" are (the clusters of people that create maps with similar biases). Feminism (at least the version I see online) seems to define the clusters by gender, sexual orientation, race, etc. But maybe the important axes are different; maybe e.g. having a high IQ, or studying STEM, or being a conservative, or something completely different and unexpected actually has a greater influence on map-making. Which is difficult to talk about, because there is always the fully general excuse that if someone doesn't have the map they should have, well, they have "internalized" something (a map of the group they don't belong to was forced on them, but naturally they should have a different map).
Can rationality be lost? Or do people just stop performing the rituals?
Heh, I immediately went: "What is rationality if not following (a specific kind of) rituals?" But I guess the key is the word "specific" here. Rationality could be defined as following a set of rules that happen to create maps better corresponding to the territory, and knowing why those rules achieve that, i.e. applying the rules reflectively to themselves. The reflective part is what would prevent a person from arbitrarily replacing one of the rules by e.g. "what my group/leader says is always right, even if the remaining rules say otherwise".

I imagine that most people have at least some minimal level of reflection on their rules. For example, if they look at the blue sky, they conclude that the sky is blue; and if someone else said that the sky is green, they would tell them "look there, you idiot". That is, not only do they follow the rule, but they are aware that they have a rule, and can communicate it. But the rule is communicated only when someone obviously breaks it; that means the reflection is only done in crisis. Which means they don't develop the full reflective model, and that leaves open the option of inserting new rules, such as "however, that reasoning doesn't apply to God, because God is invisible", which take priority over reflection. I guess these rules have a strong "first mover advantage", so timing is critical.

So yeah, I guess most people are not, uhm, reflectively rational. And unreflective rationality (I guess on LW we wouldn't call it "rationality", but outside of LW that is the standard meaning of the word) is susceptible to inserting new rules under emotional pressure.
I don't see why not. It is, basically, a set of perspectives, mental habits, and certain heuristics. People lose skills, forget knowledge, just change -- why would rationality be exempt?
Habits and heuristics are what I'd call "rituals." Are perspectives something you can lose? I ask genuinely. It's not something I can relate to.
I don't know about that. A heuristic is definitely not a ritual -- it's not a behaviour pattern but just an imperfect tool for solving problems. And habits... I would probably consider rituals to be more rigid and more distanced from the actual purpose compared to mere habits. Sure. You can think of them as a habitual points of view. Or as default approaches to issues.
Can rationality be lost? Sure, when formerly rational people declare some topic off limits to rationality because they don't like the conclusions that are coming out. Of course, since all truths are entangled, that means you have to invent other lies to protect the ones you've already told. Ultimately you have to lie about the process of arriving at truth itself, which is how we get to things like feminist anti-epistemology.
What about that sentence makes you think that the person isn't able to make that distinction? If you look at YCombinator, the semantics are a bit different but the message isn't that different. YCombinator also talks about how diversity is important. The epistemic method they teach founders is not to think abstractly about a topic and engage with it analytically, but that it's important to speak to people to understand their own unique experiences and views of the world. David Chapman's article digging into the phenomenon is also quite good.
It's interesting how the link you posted talks about importance of using the right metaphors, while at the same time you object against my conclusion that people saying "logic itself has white supremacist history" can't distinguish between the topic and the people who talk about the topic. To explain my position, I believe that anyone who says either "logic is sexist and racist" or "I am going to rape this equation" should visit a therapist.
Nobody linked here says either of those things. In particular, the original blog post says about logic: The argument isn't that logic is inherently sexist and racist and therefore bad, but that it's frequently used in places where there are other viable alternatives, and that using it in those places can be driven by sexism or racism.
Such as?
Interviewing lots of people to understand their viewpoints, and having conversations with them not to show them where they are wrong but to be non-judgemental. That's basically what YC teaches. Reasoning by analogy is useful in some cases. There's a huge class of expert decisions that are made via intuition. Using a technique like Gendlin's Focusing would be a way to get to solutions that isn't based on logic.
I guess your theory is the same as what Alice Maz writes in the linked post. But I'm not at all convinced that that's a correct analysis of what Piper Harron is writing about. In the comments to Harron's post there are some more concrete examples of what she is talking about, which do indeed sound a bit like one-upping. I only know a couple of mathematicians, but from what I hear there are indeed lots of the social games even in math---it's not a pure preserve where only facts matter. (And in general, I feel Maz' post seems a bit too saccharine, in so far as it seems to say that one-up-manship and status and posturing do not exist at all in the "nerd" culture, and it's all just people joyfully sharing gifts of factual information. I guess it can be useful as a first-order approximation to guide your own interactions; but it seems dangerously lossy to try to fit the narratives of other people (e.g., Harron) into that model.)
I'm not sure whether "social culture" is a good label. Not every social interaction by non-nerds is heavily focused on status. There's "authenticity culture", whereby being authentic and open is more important than not saying something that might lower someone's status.
The social person is right here. Remember "X is not about Y"? The difference is that your 'social culture' person is in fact low-to-average status on the relevant hierarchy. Something that's just "harmless social banter" to people who are confident in their social position can easily become a 'status attack' or a 'microaggression' from the POV of someone who happens to be more vulnerable. This is not limited to information exchange at all; it's a ubiquitous social phenomenon. And this dynamic makes engaging in such status games a useful signal of confidence, so they're quite likely to persist.
I think that one very important difference between status games and things that might remind people of status games is how long they are expected to stay in people's memory.

For example, I play pub quizzes, and often I am the person responsible for the answer sheet. Due to strict time limits, discussion must be as quick as possible; therefore in many situations I (or another person responsible for the answer sheet) have to reject an idea a person has come up with based on vague heuristic arguments, and usually there is no time for long and elaborate explanations. From the outside, it might look like a status-related thing, because I dismissed a person's opinion without a good explanation. However, the key difference is that this does not stay in your memory. After a minute or two, all these things that might seem related to status are already forgotten. Ideally, people should not even come into the picture (because paying attention to anything but the question is a waste of time) -- very often I do not even notice who exactly came up with a correct answer. People tend to forget, or not even pay attention to, whom credit should be given, and they tend to forget cases where their idea was dismissed in favour of another person's idea. In this situation, small slights that happened because discussion had to be as quick as possible are not worth remembering, and one can be pretty certain that other people will not remember them either. Also, if "everyone knows" they are to be quickly forgotten, they are not very useful in status games either. If something is forgotten, it cannot be not forgiven.

Quite different dynamics arise if people have long memories for small slights and "everyone knows" that people have long memories for them. Short memory made them unimportant and useless for status games, but in the second case, where they are important and "everyone knows" they are important, they become useful for social games, and therefore a greater proportion
Yes, this is an important aspect. I think what people usually keep in mind are not the specific mistakes, but status and alliances. In the "nerd culture", the individual mistakes are quickly forgotten... however, if someone makes mistakes exceptionally often, or makes a really idiotic mistake and then insists on it, they may gain a long-term reputation of an idiot (which means low status). But even then, if a well-known idiot makes a correct statement, people are likely to accept this specific statement as correct. In the "social culture", it's all about alliances and power. Those change slowly, therefore the reactions to your statements change slowly, regardless of the statements. If you make a mistake and people laugh at you because you are low-status and it is safe to kick you, next time if you make a correct statement, someone may still make fun of you. (But when a high-status person later makes essentially the same statement, people will accept it as a deep wisdom. And they will insist that it is totally not the same thing that you said.) It's not important what was said, but who said it. Quick changes only come when people change alliances, or suddenly gain or lose power; but that happens rarely.
The pub quiz you play has clearly defined status. You lead it. As such there's not the uncertainty about status that exists in a lot of other social interactions.
You're confusing two points of view. Let's say social Sally is talking to nerdy Nigel. From the point of view of Sally, there are a lot of microaggressions, and status attacks, and insensitivity, etc. But that is not because Nigel is cunningly conducting a "devious status game", Nigel doesn't care about status (including Sally's) and all he wants to do is talk about his nerdy stuff. Nigel is not playing a let's-kick-Sally-around game, Sally is misperceiving the situation.
Oh, Nigel may not care about Sally's status - that much is clear enough, and I'm not disputing it. He cares a lot about his own status and the status of his nerdy associates, however. That's one reason why he likes this "bzzzzzzt, gotcha!" game so much. It's a way of saying: "Hey, this is our club; outsiders are not welcome here! Why don't you go to a sports bar, or something." Am I being uncharitable? Perhaps so, but my understanding of Nigel's POV is as plausible as yours.
Our friend Nigel may or may not play status games of his own, but my issue was with you saying And, nope, the social person is not. Of course, it all depends on the situation and she may be right, but, generally speaking, feeling like an outsider does NOT mean that everyone is playing devious status games against you.
Depends on whether "bzzzzzzt, gotcha!" is applied more frequently to the outsiders than to the insiders when they make the same mistake. In other words, does "making a mistake" screen off "being an outsider"?
I'm not sure it does depend on that. Suppose your ingroup is made up predominantly of people with ginger hair and your outgroup predominantly of people with brown hair. Then if you make fun of people with brown hair, and admire people with ginger hair, you're raising the status of your ingroup relative to your outgroup even if you apply this rule consistently given hair colour. Similarly, if your ingroup is predominantly made up of people who don't make a certain kind of mistake and your outgroup is mostly made up of people who do. It's not clear to me that there's a good way to tease apart the two hypotheses here. And of course they could both be right: Nigel may sincerely care about the nerdy stuff but also on some level be concerned about raising the status of his fellow nerds.
On color of skin, no, but on IQ somewhat. This is so for two reasons. The first one is capability to learn -- a sufficiently high-IQ person will be able to figure out what's happening and adjust. An insufficiently-high-IQ person will not and will be stuck in unhappy loops. The second one is that the nerd culture of sharing information depends on the ability to understand and value that information. If you don't understand what the nerds are talking about, you have to fall back on social games because you have no other options. That's what I mistakenly thought was happening with the mathematician quote in the grandparent comment -- turned out I was wrong, but such situations exist. Oh, and gender plays a role, too. Women are noticeably more social than men, so the nerd cultures tend to be mostly male.
Note that in most IM conversations and texts, ending a message with a period makes one seem angry or insincere (see here).
That should depend on the rules of the language. My supervisor sometimes texted me You will come (in Ukrainian) when we had not scheduled a meeting; I would rush in to see what was the matter, and find out he had forgotten the question mark (again).
It's not clear to me that the other person really was "born on the other side of IQ tracks". (Unless you just mean that she's female and black, I guess?) I mean, she did a PhD in pure mathematics. Some of the things she says about it and about her experience in mathematics are certainly ... such as might incline the cynical to think that she actually just isn't very good at mathematics and is trying some passive-aggressive thing where she half-admits it and half-blames it on The Kyriarchy. But getting to the point at which anyone is willing to consider letting you do a mathematics PhD (incidental note: her supervisor is a very, very good mathematician) implies, I think, a pretty decent IQ. For the avoidance of doubt, I am not myself endorsing the cynic's position above. I haven't looked at her thesis, which may in fact make it clear that she's a very good mathematician indeed. In which case her difficulties might in fact be the result of The Kyriarchy, or might be the result of oversensitivity on her part, or any combination thereof. Or in fact might simply be a useful rhetorical invention.
Ah, I didn't follow the link to Piper's blog so my expression was misguided -- I take it back. In this case, I think, her complaint reflects the status game mismatch -- either she's playing it and her conversation partner isn't, or vice versa, she is not and he is. It's hard to tell what is the case.
Do try to.
Since writing the above, I have. It's ... extremely unusual.

Löb's theorem states that "If it's provable that (if it's provable that p then p), then it's provable that p." In addition to being a theorem of set theory with Peano arithmetic, it's also a theorem of modal logic.

Try this on for size: If I believe that (if I believe that this chocolate chip will cure my headache, then this chocolate chip will cure my headache), then I believe that this chocolate chip will cure my headache.

-Agenty Duck

Nitpick: it would be better to write "also a theorem of epistemic logic", since there are other modal logics where it is not provable. (E.g. just modal logic K).
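A sketch of the quoted statement in the notation of provability logic, reading □p as "p is provable" (this is the standard modal rendering; the theorem schema below is term-by-term the sentence quoted above):

```latex
% Löb's theorem as a schema of provability logic (GL):
% "if it is provable that (provability of p implies p),
%  then p is provable."
\Box(\Box p \rightarrow p) \rightarrow \Box p
```

The Agenty Duck joke then comes from substituting "I believe" for □, which is exactly the move the nitpick above cautions about: the schema holds for the provability interpretation, not for every modal logic.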

Obvious in hindsight: one cause of massive bee death turned out to be neonicotinoids. In other words, newsflash: insecticides kill insects.

Was there any way this could have been anticipated?

It's not obvious that use of a pesticide would substantially harm bees, as pesticides have been in use for a very long time, and many organophosphate pesticides are fairly non-toxic to bees. Neonicotinoids, however, are extremely toxic to bees. The use of neonicotinoids is fairly recent; large-scale use only started in the late 90's, and very soon after that beekeepers started filing petitions to the EPA. They were ignored. I'd say this is more a case of systemic and deliberate ignorance/politics rather than a 'mistake'.
Be more conservative. Require more evidence before you allow a new insecticide to come to market.

Nate Soares' recent post "The Art of Response" on Minding Our Way talks about effective response patterns that people develop to deal with problems. What response patterns do you use in life or in your field of expertise that you have found to be quite effective?

I finally gave in and opened a Tumblr account at http://dooperator.tumblr.com/ . This open-thread comment is just to link my identity on Less Wrong with my username on websites where I do not want my participation to be revealed by a simple Google search for my name, such as SlateStarCodex and Tumblr.


Information coupled with surprise this week:

the chance of transmission during any single episode of unprotected vaginal sex is estimated at a 1 in 2,000. Thus, the odds you were infected are 0.05 x 0.0005 = 0.000025, i.e. 1 in 40,000. That's less than your lifetime risk of getting killed by lightning (if you live in the US) and less than the chance you will die in the coming week in some sort of accident. As for other STDs, the lack of symptoms is a strong indicator that you didn't catch anything.
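The quoted arithmetic can be checked directly. A minimal sketch, where the 0.05 factor is taken from the quote (presumably its estimate of the probability that the partner was infected) and 1/2000 is the quoted per-act transmission risk:

```python
# Checking the multiplication in the quoted passage.
p_partner_infected = 0.05        # from the quote (an assumed prevalence figure)
p_transmission_per_act = 1 / 2000  # quoted per-act transmission risk (= 0.0005)

p_infected = p_partner_infected * p_transmission_per_act
print(p_infected)             # ~2.5e-05
print(round(1 / p_infected))  # 40000, i.e. 1 in 40,000
```

The code only checks the multiplication; the comparisons to lightning and accident risk are claims from the quote itself.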


A less authoritative but more nuanced relevant...

I wouldn't put too much faith in the 1/2000 figure for chance of HIV transmission. There is no known way to calculate that with any reasonable confidence. Estimates vary from something like 1/500 to 1/2500 (this is for vaginal sex; anal sex has much higher transmission risk).

I'm an undergrad going for a major in statistics and minors in computer science and philosophy. I also read a lot of philosophy and cognitive science on the side. I don't have the patience to read through all of the LW sequences. Which LW sequences / articles do you think are important for me to read that I won't get from school or philosophy reading?

Check out the Rationality: A to Z contents page, click on things that look interesting, and it'll mostly work out. A Human's Guide to Words is a really good exposition of philosophy. The subsequence on thinking about morality, which I can point at with the post fake fake utility functions, is good too. Or if you just want to learn what this rationality stuff is about, read the early posts about biases and read Knowing about biases can hurt people. That one's important -- the point of knowing about biases is to see them in yourself. I just don't know what suits you, is all.
One of the chief benefits of reading through the sequences is being able to notice, label, and communicate many different things. Instead of having a vague sense that something is wrong and having to invent an explanation of why on the spot, I can say "oh, there's too much inferential distance here" or "hmm, this argument violates conservation of expected evidence" or "but that's the Fallacy of Gray." But in order to have that ability, I need to have crystallized each of those things individually, so that I can call on it when necessary. But if you're only going to read one thing, A Human's Guide to Words (start here) is probably going to be the most useful, especially going into philosophy classes.
I would add that most of those things can also be found in other sources; sometimes they have different names. But the practical question is: have you read those "other sources"? If not, then the Sequences are a compressed form of a lot of useful stuff. They may be long, but reading all the original sources would be much longer. (This is not to discourage people from reading the other sources, just saying that if "that's too much text" is your real objection, then you probably haven't read them.)
Unfortunately, I think many of the people who come to LessWrong are in the position of having read about 50-75% of the content of the sequences through other sources, and may become frustrated by the lack of clear indication within the sequences as to what the next post actually includes.... it is very annoying to read through a couple of pages only to find that this section has just been a wordy setup to reviewing basic physics.
What % do you define as "many"? Those percentages of content already known sound very high to me in regards to the first 1/3rd of the Sequences. (I'm still working on the rest so can't comment there.) Also, they can use the Article Summaries to test out whether they've seen the concept before and then read the full article or not. I don't recommend just reading the summaries though. I think a person doing that would be doing a disservice to themselves because of the reasons supplied by Vaniver above.
The Quantum Mechanics sequence - you won't get that in school.
How about the Grad Student Advice Repository?
I'm more interested more in epistemic rationality concepts rather than practical life advice, although good practical advice is always useful.

Smoking cigarettes is very protective against Parkinson's. The evidence is clear, large, and replicated in large samples. Hypothetically, would someone with strong genetic indications of risk for Parkinson's, and genetic indications that they are protected against cardiovascular disease, lung cancer, and other smoking-related diseases, be making a healthy choice to start smoking?


Presumably it's the nicotine that has this effect. You can get nicotine into your system in ways less unhealthy than smoking cigarettes.

SSC describes a related effect and also mentions Parkinson's in SCHIZOPHRENIA: NO SMOKING GUN (recent).
Thanks for that. What a blog! My personal experiences with psychosis bias me towards theories that play up suggestibility as a key feature in psychotic disorders. I’d favour an alternative hypothesis that the marketing of cigarettes (e.g. presence in movies, positioning as a bad and cool vice) encourages sz’s to take up smoking, then addiction and receptor biases maintain the habit.

Anecdotally, Russians and Englishmen pronounce Latin [names of biological taxa] rather differently. In my opinion (not really informed, because we did not have a Latin course), saying '-aceae' as 'ayshae' is wrong, and although I know people do that, it still throws me off for a moment. Still, I've just realized that there are non-English biologists who mangle Latin as they wish. Has anyone got any data on how widespread English Latin is?

Yes, "Englishing" Latin is a pet peeve of mine.
All these people going around scribbling ROMANES EUNT DOMUS on walls... :-)
I do: here in Italy we speak the most direct descendant of Latin, and newscasters still pronounce Latin words as if they were English.
Every country has a different pronunciation of Latin. Standardizing on the English version sounds like an improvement to me.
English has awful, unintuitive pronunciation rules. Almost any other Indoeuropean language would be better. I would prefer Spanish or Italian.
The standard pronunciation of Latin by English speakers doesn't follow English pronunciation rules. I added a link. Italian pronunciation was a possible standard, since it is generally used by the Catholic Church. But that doesn't seem likely to spread in Russia.
Italian pronunciation rules are different from those of Classical Latin. Even Ecclesiastical Latin sounds different from Classical Latin, and closer to the modern Italian norm. My school priest pronounced Humanae Vitae as "oo-man-eh bee-teh," whereas in ancient times it would have been "hoo-man-eye wee-tye."

Yes, I'm aware politics kills minds.

What did Obama do wrong?

I hear people say (1) the economy didn't grow fast enough and (2) the U.S. is weaker, globally.

Is there objective evidence of either of these claims? Or is this mostly just blue vs. green tribalism?

The economy definitely is not growing fast enough, but blaming Obama doesn't really make sense. Very weak growth is a problem throughout the developed world, and the US economy is, if anything, better than average. Leaving aside issues that are primarily questions of personal values, I see a couple of important failures that seem pretty objective.

* Affordable Care Act: The rollout of Healthcare.gov was an embarrassing debacle, but the law itself just isn't very good, even from a liberal perspective (the basic plan was originally a proposal by the right-wing Heritage Foundation). It doesn't achieve anything like universal coverage, there have been continued large increases in insurance premiums, the insurance "corridors" are hemorrhaging money faster than expected, and there are some signs of the "death spiral" (United Health is losing so much money they plan to exit the [individual] market). Even Obama has admitted that "if you like your health plan, you can keep it" turned out not to be true. Keep in mind that the ACA was designed so that many of its aspects don't take full effect for years, so we still don't really know how things will shake out, but it's clear Obama's signature legislation isn't curing America's healthcare woes.
* The Obama administration's policy of supporting regime change against secular Arab governments has basically been a disaster, leading to catastrophic civil wars in Libya and Syria. Islamists are almost certainly a lot stronger than they would have been if the administration had done nothing. The side effects of this are disastrous for long-term US policy goals like supporting European integration, since the resulting refugee crisis has (temporarily?) killed Schengen and strengthened the nationalist parties in Europe. And the crisis is ongoing; we have no idea how bad it will get.
Which liberal health policy experts have you been reading to get that impression of the Affordable Care Act? Most liberal economists I have read have mixed feelings on the act, but think it was largely an improvement. While conservatives would probably agree with most of your statement, I would hardly call your view an objective one if a lot of experts would disagree with it. Here is Austin Frakt on the Affordable Care Act.
I'm saying the law, taken on its merits, is not actually good by the standards liberals profess. I'm aware most liberals supported it (with some grumbling) but I think that's mainly because of Halo Effect/Affective Death Spiral. If George W. Bush had proposed this, I suspect liberals would have criticized it for locking us even deeper into the private insurance trap (giving corporations a captive market).
Thank you for the reply. This is interesting. Is the U.S. health care system as a whole better than before the ACA in your view? Also, could Obama have gotten anything more liberal—like universal coverage—through congress? What are your politics?
No. I'd mostly prefer market-oriented reforms for healthcare (plus vouchers), but right now we tend to get the worst of both worlds. Single payer would also probably be better than what we have now. The main obstacle wasn't really that it was too liberal. Opposition from the insurance lobby is what killed "Hillarycare" back in '93, even though Democrats had huge majorities then as well. Once the insurance lobby got the "public option" removed from the legislation, they supported it. Mostly paleoconservative, less opposed to "big government" than most paleocons.
He created very high expectations (remember Hope & Change?) and massively underperformed. Basically, he turned out to be a mediocre President, not horrible, but not particularly good either. He disappointed an awful lot of people. As to claims that you mention, Presidents have little control over economy. Economic growth is just not a function of who currently lives in the White House. With respect to "weaker globally", it's a complicated discussion which should start with whether you want US to be a global SWAT team.
Thank you! And "Yes We Can!" :) I guess all political slogans blend together for me. All of this year's nominees are making similarly over-the-top claims about what they will accomplish; I'm sincerely surprised anyone believes any of them.

One "change" that did happen was the ACA. I know this is contentious depending on your politics, but it at least qualifies as the sort of "change" Obama's constituents likely had in mind when electing him.

Do you have any metrics in mind to support this? Presidential rankings seem problematic to me, especially trying to rank Obama so early on, since we haven't seen the long-term impact of anything he has done.

This is also my sense, though I don't know much about economics. My terribly over-simplified view is that the economy was horrible in 2008 and is much better now. So that is good. And while I don't give Obama anything like full credit for that, I also don't accept the criticism that he made the economy worse or didn't grow it "enough".

This is my view as well. I have no idea where critics of Obama get the evidence that the US is less safe now than in 2008. I'm assuming it's just tribal politics, but I'd be open to arguments.
I don't want to go into comparisons of "balance sheets" of good things he did versus bad things he did. That's prime minefield territory and LW isn't a good place for such discussions.
The thing to consider about the economy is that the president is not only not responsible, but mostly irrelevant. An easy way to see this is the 2008 stimulus packages: critics of the president frequently share the graph of national debt, which grows sharply immediately after he took office, ignoring that the package was demanded by Congress and supported by his predecessor, who wore a different color shirt.

A key in evaluating a president is the difference between what he did, what he could have done, and what people think about him. Consider that the parties were polarizing before he took office.

In terms of specifics, I am disappointed that he continued most of the civil rights abuses of the previous administration with regard to due process. I also oppose the drone warfare doctrine, which is minimally effective at achieving strategic goals and highly effective at generating ill will in the region. By contrast, I am greatly pleased at the administration's commitment to diplomacy and the improvement of our reputation among our allies. I am pleased that major combat operations were ended in two theaters, and that no new ones were launched. I applaud the Iranian nuclear agreement.
So what about Libya? What about the fight against ISIS? The former was a quick-strike operation that caused the country in question to go to hell fast. The latter is an example of things going to hell so badly after a "successfully ended operation" that we had to intervene again.
As compared to what alternative? There is no success condition for large scale ground operations in the region. If the criticism of the current administration is "failed to correct the lack of strategic acumen in the Pentagon" then I would agree, but I wonder what basis we have for expecting an improvement. It seems to me we can identify problems, but have no available solutions to implement.
Well, not intervening in Libya for starters.
What are your criteria for good foreign policy choices then? You have conveyed that you want Iraq to be occupied, but Libya to be neglected, so non-intervention clearly is not the standard. My current best guess is 'whatever promotes maximum stability'. Also, how do you expect these decisions are currently made?
I wouldn't object nearly as much to occupying Libya as to what Obama actually did: namely, intervene just enough to force Gaddafi out and leave a huge mess. Actually, I would still object, but that's because Gaddafi had previously abandoned his WMD program under US pressure, so getting rid of him now sends a very bad message to other third-world dictators contemplating similar programs.
What, like Libya? Or the fight against ISIS? The former is an example of a fast intervention that caused things to go straight to hell. The latter is an example of him "ending an operation" and things going to hell so badly that he had to intervene again.
I think Obama's greatest accomplishment was the overhaul of military spending he worked on with Secretary Robert Gates at the start of his administration. I'm also highly supportive of his executive actions on immigration reform.

I find the Affordable Care Act difficult to evaluate. They made so many changes at once that it's hard to ascertain their net effect on health care overall. Yes, increases in health care costs have slowed. Yes, younger people are spending more on insurance that they probably don't need. Yes, there are multiple ways to improve the system which are not politically feasible.

I think Obama's biggest failure was Libya. The US should stop supporting rebellions, or invading countries. It's never clear what's going to happen when the revolutionaries take over, or the new regime is in place, and the war itself is always bad.

The issue I find most perplexing is wiretapping. It seems like Obama didn't do anything about it, and nobody really seems to have cared. Other failures, such as his failure to close Guantanamo Bay, can be explained away as the fault of Congress, but I don't think the wiretapping issue can.

One thing people don't talk about enough is the unprecedented slowdown in the growth of government spending these past few years. Look at what happened with nominal government spending. I think this is principally due to the Tea Party, because it coincides with their rise and fall almost exactly, but I still think Obama's role in this brief change is an important one. Alex Tabarrok's views on the subject from 2008 come across to me as prescient.
IIRC, that was Nicolas Sarkozy's idea. Obama's fault is that he joined him. Back in mid 1990s USA and the whole Western World was heavily criticized for not intervening in Rwanda conflict and many people in the US and Europe took that criticism to their hearts and now they tend to err in an opposite direction.
That's not a bug, that's a feature, working as designed.
What are you trying to explain? Why do you believe that Obama did anything wrong? Are you trying to explain his approval ratings? Shouldn't ~50% approval be your default assumption of political polarization? If so, there is nothing to explain. Are they very different from other presidents? A little lower, but nothing out of the ordinary. W's peak approval was just after 9/11. Clinton's peak approval was during the impeachment. Clinton's rose over the course of his term, while W's and Obama's fell. I guess you could interpret that as judging their actions, but W's ended low and Obama's ended mediocre. Added: better than the summary statistics in wikipedia are these graphs (correcting the dead link in wikipedia). Obama had a two year honeymoon period and has bounced around 50/50 since then.
Anger from the political right. Though it's generally what I would expect given the nature of politics, I want to understand whether there is an objective basis for opposition to Obama... or if it is just pure blue vs. green stuff. I have a sense that race plays a big part in the right's hatred of him, but I'm not sure how to go about validating this.
My link also gives peak disapproval ratings. Obama is perfectly normal. W is an outlier, with a peak disapproval of 71%. Other than him, all the presidents since Ford had a peak disapproval of 54-60%. (Ford didn't have time to do anything to merit disapproval.) Obama is exactly in the middle. (Average disapproval is probably a better metric, though.)
Anecdotally, a lot of the anger came from him pardoning Nixon.
Sure, his ratings (archive) crashed from 71/3 on inauguration to 50/28 after the pardon, but that just took him to a fairly normal level.
Interesting. Good info. Thank you.
I don't see any unusual anger. It's election year, so the usual suspects are already hard at work operating their mud-throwers at max volume and intensity... What in politics would you consider to be an "objective basis"?
I'm not sure. Perhaps there is very little that can be considered objective, since the two parties have competing definitions of success. Are you saying there is no objective way to evaluate a president's performance? Which measures did you use to conclude the following?
Evaluating performance necessarily involves specifying goals and metrics. If you provide hard definitions of the goals that you're interested in, as well as precise specifications of the metrics, plus a particular weighting scheme for combining performance numbers for multiple goals, well, then you can claim that you are objectively evaluating the performance. The problem is that you're evaluating a very narrow idea of performance, one that involves the goals and the metrics and the weights that you have picked. Other people can (and probably will) say that your goals are irrelevant, your metrics are misleading, and your weights are biased X-) I listened to my feelings :-P

The paperclip maximizer thought experiment makes a lot of people pattern-match AI risk to science fiction. Do you know of any AI-risk-related thought experiments that avoid that?

Major AI risk is science fiction -- that is, it's the kind of thing science-fiction stories get written about, and it isn't something we have experience of yet outside fiction. I don't see how any thought experiment that seriously engages with the issue could not pattern-match to science fiction.
There is a field that thinks hard about risks from unintelligent computers (computer security) that tackles very difficult problems that sometimes get written about in popular fiction (Neil Stephenson, etc.) and manages to not look silly. I think to the extent that (U)FAI research is even a "real area," it would be closest in mindset to computer security.
Computer security as portrayed on TV frequently does look silly.
This is a fully general counterargument: "X as portrayed on TV frequently does look silly"
A fully general argument is not "an argument where you can substitute something for X and get something grammatical". Not all things look silly on TV with the same frequency.
I endorse Lumifer's quibble about the field of computer security, with the caveat that often the fact that the risks happen inside computer systems is much more important than the fact that they come from people. The sort of "value alignment" questions MIRI professes (I think sincerely) to worry about seem to me a long way away from computer security, and plausibly relevant to future AI safety. But it could well be that if AI safety really depends on nailing that sort of thing down then we're unfixably screwed and we should therefore concentrate on problems there is at least some possibility of solving...
I think my point wasn't about what computer security precisely does, but about the mindset of people who do it (security people cultivate an adversarial point of view about systems). My secondary point is that computer security is a very solid field, and doesn't look wishy washy or science fictiony. It has serious conferences, it has research centers, industry labs, intellectual firepower, etc.
Wei Dai:
I'm not sure how much there is to learn from the field of computer security, with regard to the OP's question. It's relatively easy to cultivate an adversarial mindset and get funding for conferences, research centers, labs, intellectual firepower, etc., when adversaries exist at the present time and are causing billions of dollars of damage each year. How to do that if the analogous adversaries are not expected to exist for a decade or more, and we expect it will be too late to get started once the adversaries do exist?

...Can we consider computer security a success story at all? I admit, I am not a professional security researcher but between Bitcoin, the DNMs, and my own interest in computer security & crypto, I read a great deal on these topics and from watching it in real-time, I had the definite impression that, far from anyone at all considering modern computer security a success (or anything you want to emulate at all), the Snowden leaks came as an existential shock and revelation of systemic failure to the security community in which it collectively realized that it had been staggeringly complacent because the NSA had devoted a little effort to concealing its work, that the worst-case scenarios were ludicrously optimistic, and that most research and efforts were almost totally irrelevant to the NSA because the NSA was still hacking everyone everywhere because it had simply shifted resources to attacking the weakest links, be it trusted third parties, decrypted content at rest, the endless list of implementation flaws (Heartbleed etc), and universal attacks benefiting from precomputation. Even those who are the epitome of modern security like Google were appalled to discover how, rather... (read more)

It's a success story in the sense that there is a lot of solid work being done. It is not a success story in the sense that currently, and for the foreseeable future, attack >> defense (but this was true in lots of other areas of warfare throughout various periods of history). We wouldn't consider armor research not a success story just because at some point flintlocks phased out heavy battlefield armor. The fact that computer security is having a hard time solving a much easier problem with a ton more resources should worry people who are into AI safety.

We wouldn't consider armor research not a success story just because at some point flintlocks phased out heavy battlefield armor.

I think you missed the point of my examples. If flintlocks killed heavy battlefield armor, that was because they were genuinely superior and better at attack. But we are not in a 'machine gun vs bow and arrow' situation.

The Snowden leaks were a revelation not because the NSA had any sort of major unexpected breakthrough. They have not solved factoring. They do not have quantum computers. They have not made major progress on P=NP or reversing one-way functions. The most advanced stuff from all the Snowden leaks I've read was the amortized attack on common hardwired primes, but that again was something well known in the open literature and why we were able to figure it out from the hints in the leaks. In fact, the leaks strongly affirmed that the security community and crypto theory has reached parity with the NSA, that things like PGP were genuinely secure (as far as the crypto went...), and that there were no surprises like differential cryptanalysis waiting in the wings. This is great - except it doesn't matter.

They were a revelation because they reve... (read more)

Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design. https://www.facebook.com/yudkowsky/posts/10153833539264228?pnref=story
Well, what a relief!
A fair point, though that mindset is hacker-like in nature. It is, basically, an automatic "how can I break or subvert this system?" reaction to everything. But the thing is, computer security is an intensely practical field. It's very much like engineering: has to be realistic/implementable, bad things happen if it fucks up, people pay a lot of money to get good solutions, these solutions are often specific to the circumstances, etc. AI safety research at the moment is very far from this.
Not quite. Computer security deals with managing risks coming from people, it's just that the universe where it has to manage risks is a weird superposition of the physical world (see hardware or physical-access attacks), the social world (see social engineering attacks), and the cyberworld (see the usual 'sploit attacks).
I think many people intuitively distrust the idea that an AI could be intelligent enough to transform matter into paperclips in creative ways, but 'not intelligent enough' to understand its goals in a human and cultural context (i.e. to satisfy the needs of the business owners of the paperclip factory). This is often due to the confusion that the paperclip maximizer would get its goal function from parsing the sentence "make paperclips", rather than from a preprogrammed reward function, for example a CNN trained to map the number of paperclips in images to a scalar reward.
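To make the distinction concrete, here is a purely illustrative sketch (every name here is invented, and the toy "counter" stands in for what would really be a trained network):

```python
# Hypothetical sketch of the "preprogrammed reward function" idea above.
# `paperclip_counter` stands in for a trained vision model (e.g. a CNN)
# that estimates how many paperclips appear in an image.

def reward(image, paperclip_counter):
    """Scalar reward: the estimated paperclip count for an image.

    The agent optimizes this number directly -- it never parses the
    English sentence "make paperclips", so there is no step at which
    a human, cultural interpretation of the goal could enter.
    """
    return float(paperclip_counter(image))

# Toy stand-in "model": treat pixel values above a threshold as paperclip-like.
def toy_counter(image):
    return sum(1 for px in image if px > 0.5)

print(reward([0.9, 0.1, 0.8, 0.7], toy_counter))  # prints 3.0
```

The point of the sketch is only that the goal lives in the number returned, not in any sentence the designers had in mind.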
Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?
Just speaking of weaknesses of the paperclip maximizer thought experiment. I've seen this misunderstanding at least 4 out of 10 times that the thought experiment was brought up.
If you are just trying to communicate risk, analogy to a virus might be helpful in this respect. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to, it is just a side effect of achieving its goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., end malaria), but that does harm due to unexpected consequences or because the artificial virus evolves, self-modifying its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. AI will operate in an environment that is many times more complex: "mindspace".

Assuming it was written by her and not her adviser.

The writing doesn't sound like the same voice as her advisor's (e.g. arXiv:1402.1131). OTOH it is plausible that most of the original research in it was the latter's. Also, the fact that she doesn't seem to have ever published anything else is pretty suspicious. EDIT: also, she took ten years to finish it.

All in all, I'd guess her IQ is above 100 but below 130.

That only postpones the problem for a few years, unless you establish a permanent military presence.

The US can keep 100,000+ soldiers on the ground for 7 years, have all of its top military brass focus on that conflict, fight cleverly and aggressively against the opposition, lead the country through the process of drafting a constitution and holding elections, train the new military and police forces, spend tens of billions of dollars helping to build the country's infrastructure (in addition to hundreds of billions of dollars of military spending), gradually remove its troops in an orderly fashion as negotiated with the country's new government, and still have everything go horribly within a couple years of leaving.

Why boredom is anything but boring

Implicated in everything from traumatic brain injury to learning ability, boredom has become extremely interesting to scientists.


But I now thought that this end [one's happiness] was only to be attained by not making it the direct end. Those only are happy (I thought) who have their minds fixed on some object other than their own happiness[....] Aiming thus at something else, they find happiness along the way[....] Ask yourself whether you are happy, and you cease to be so.

-John Stuart Mill, the utilitarian philosopher, in his autobiography

Policy Debates Should Not Appear One-Sided. Is there testimony against the one-sided evidence-by-testimony for the paradox of hedonism at the ... (read more)

Has anyone heard of Amazon using drones for actual deliveries? Or are they still just in testing?

http://www.marketwatch.com/story/drone-delivery-is-already-here-and-it-works-2015-11-30 suggests that a main problem holding back drone deliveries is governmental regulation.
Amazon mentions regulation, but it also says that there is a lot of testing ahead.
They're in the process of weaponizing the drones to fight Uber's self-driving delivery cars.
I've heard a rumor Amazon isn't really taking the "Amazon Prime Air" concept very seriously and is just doing it to try to spur delivery companies into improving service.

If you made an incorrect statement and this gets pointed out, you will lose status for admitting it

LW culture is built specifically to encourage and reward correcting oneself.

Yup. Though the original post was specifically about mainstream culture. (Twitter culture?)

I agree with your implied point that putting boots on the ground for a few years (and then removing them) is less likely to lead to horrible outcomes if it's done in a stable region, where law-and-order is well-established in the neighboring countries and there are unlikely to be any major disruptive events in the region during the military engagement or the decade after it has ended.

How about actually removing your troops in an orderly fashion, rather than causing negotiations to fail over a minor technical matter and removing the troops all at once?

I a... (read more)

Can Economics Change Your Mind?

Economics is sometimes dismissed as more art than science. In this skeptical view, economists and those who read economics are locked into ideologically motivated beliefs—liberals versus conservatives, for example—and just pick whatever empirical evidence supports those pre-conceived positions. I say this is wrong and solid empirical evidence, even of the complicated econometric sort, changes plenty of minds.

Can economics change your mind?

Where to start? I could write a whole ongoing blog on this question (wait…). In

... (read more)

Carol Dweck on fixed vs. growth mindsets

In terms of theory, I'm not sure if fixed vs. growth mindset is the best way to describe the comparison. I feel like there should be a better way to define the two concepts more precisely, but I'm not sure exactly how. I think the research is still useful despite my concerns, although you're more than welcome to argue it isn't. Anyway, I've been wondering about this in terms of LessWrong. Does LessWrong as a community have a fixed mindset? The praising for being smart vs. praising for effort distinction used made... (read more)

If I try to quickly taboo the words "fixed mindset" and "growth mindset", the essential question is probably this:

Is the person aware (not verbally, but on the gut level) that their own skills could improve in the future, or do they implicitly assume that their skills will always stay the same?

It is a bit more complicated than this. For example, the person may deny the possibility of growth by refusing to classify something as a "skill", because merely reframing something as a "skill" (as opposed to a "trait") already suggests the possibility of improvement. For example, one person would say "I am introverted" where another person would say "my social skills for dealing with strangers are not good enough (yet)". In other words, the person may reject not just the possibility of improving their own skill, but the idea of the trait being modifiable in general.

Also, this doesn't have to apply generally. For example a stereotypical nerd may assume that you are able to learn programming, but that social skills are innate; while another person may assume that social behaviors are learned, but the talent to understand mat... (read more)

I think that's a good definition of the theory as Carol Dweck would define it; I'm just not so sure it's the best definition of the experimental results. For instance, what precisely is gut-level awareness? How would I test it experimentally if subjects can't vocally express this awareness? Is the fixed mindset due to unawareness of the ability to improve, or due to a desire to stay the same? Is it that the individual is aware they can improve, but simply overestimates their own probability of getting worse or underestimates their probability of getting better? Is it an issue of avoidance of failure, or a failure to approach goals? If I were to define the two terms, I might use something like:

* fixed mindset: when individuals are praised for their attributes, they are more likely to engage in behaviors intended to display or protect those attributes.
* growth mindset: when individuals are praised for their effort, they are more likely to engage in behaviors intended to improve their attributes.

But that's rough. I'm not familiar with all the studies on the subject.
Just like you can run implicit racism tests, I think you could also run tests where you let participants read various statements and measure their reactions. I think that points to part of the experiments, but it doesn't explain the whole concept.

'Staying in the present' is a popular pop-psychology prescription. The evidence suggests a different and more sophisticated attitude to time:

... Zimbardo believes research reveals an optimal balance of perspectives for a happy life, commenting that our focus on reliving positive aspects of our past should be high, followed by time spent believing in a positive future, and finally a moderate (but not excessive) amount of time spent in enjoyment of the present.

-Wikipedia on Zimbardo

So instead of living in the present, try living in the positive aspects of t... (read more)


I want to determine whether I ought to have children or not based on the consequences for the population, my child(ren) and me personally.

I reckon the demographic factor that is most relevant to this choice is my status as a mentally ill person.

My decision cycle lasts from now till my prime fertile years (till I’m 35).

I will have kids if:

The consequences for the population are good. If existing evidence suggests population growth is good, then the consequences for population growth are good. Population growth is basically good. There may be some non-linear... (read more)

Re the good of the child, do you think your life is net-positive for you? If so, and if you think your child's life is likely to be about as good as yours, that suggests their life will also be net-positive for them.
Watch your baseline: you should not consider the benefits that you and your child might get vs. not having children, but rather the benefits you and the child might get vs. the benefits that you and another child might get if you did not have a child but became involved in a mentor program (or other volunteer activity helping children). It may be hard to determine the value you get through working with other people's children, but there are two big plus sides to doing so: 1. you have a comparative advantage for a certain population of kids; those with mental illnesses may benefit especially from an adult who has experienced something like what they are going through, and 2. you can experiment to determine the value you get from a mentor program much more easily (or rather, with much lower cost) than you can experiment with having your own kids -- and it makes good sense to try the low-cost experiment before you run any final calculations.
I think the cost of children is a factor in the psychological well-being of the parents, so it's double counting to treat those as separate items. More to the point, you are not an average. While the effect of a child is slightly negative for the average parent, parents will vary widely in the effect of their own children on their life. If you are wealthy, in a stable marriage, and knowledgeable about parenting, then I would expect children to be net-positive for your well-being. I think a lot of the negatives of children stem from poor decision-making by the parents which leads to unnecessary stress.
Yes. You should be able to easily take care of yourself (financially, logistically, emotionally) before you accept the burden of taking care of someone else who cannot reciprocate for the following few years. Imagine that you have less time, less energy, and less money; every day, for the following few years. Plus some unexpected problems appearing randomly. This is how it is when you have a baby. In return you get a cute little person who is similar to you, loves you more or less unconditionally (unless you really screw up), and visibly "becomes stronger" every month. That can be hugely emotionally rewarding. However, that emotional reward doesn't change the fact that you still have less time, less energy, and less money. So if something was a problem before, it will become a much greater problem with the baby. That also includes possible problems with the relationship: now the partners have more stress, and less time to talk or have sex (which are the two typical methods of solving interpersonal problems).

Can you think of any good reason to consult any so-called psychic?


I can think of a good reason for anything. I ask my brain "conditional upon it being a good idea, what might the situation be?" and the virtual outcome pump effortlessly generates scenarios. A professional fiction writer could produce a flood of them. Try it! For any X whatever, you can come up with answers to the question "what might the world look like, conditional upon X being a good idea?" For extreme X's, I recommend not publishing them. If you find yourself being persuaded by the stories you make up, repeat the exercise for not-X, and learn from this the deceptively persuasive power of stories.

Why consult a psychic? Because I have seen reason to think that this one is the real deal. To humour a friend who believes in this stuff. For entertainment. To expose the psychic as a fraud. To observe and learn from their cold reading technique. To audition them for a stage act. Because they're offering a free consultation and I think, why not? (Don't worry, my virtual outcome pump can generate reasons why not just as easily as reasons why.)

What is the real question here?

You got me; there was no real question. It was all made up for fun. It would be fun to hear about a rationalist's experience of visiting a psychic (or desire to), their interpretation of it, and whatever unusual circumstances and reasoning led them to it.
Even conditional on someone having had those experiences, I find it unlikely that the person would write a reply to a question on LW posed like the question above.
Thanks for the feedback.

Cold reading, and externalizing your unconscious thoughts so as to allow conscious consideration of them, are sometimes very useful things. So, sometimes, is manipulating symbols so as to deeply seat changes to those thoughts. There's a great big grab-bag of tricks that human societies have come up with over the millennia to do these things, including some activities practiced by the more interesting subsets of individuals using that label.

There are spaces within the occult philosophy scene that effectively say, for example, things like that coincidences and synchronicity are very important to pay attention to because noticing what you find synchronistic and interestingly-coincidental about the swirling morass of the world around you reveals what you're actually preoccupied by and how you really feel about things. Or that by forcing yourself to have unconscious emotional reactions to fairly random charged symbols and trying to interpret what they mean to you (think tarot), you gain a better appreciation of what is important to you.

Good ones are good judges of character. Might want to befriend one rather than be a client, though.
Follow up question: has anyone on LessWrong ever actually consulted a psychic for any reason?
I'm technically on LessWrong and my biggest reason was desperation. I could PM you most of their answer if you're curious.
Don't PM me! This account is public, anyone can use it.

Could a hypothetical being exist that is so sensitive to harms and goods, and experiences such extremes of harm and good, that altruistic people would be best served by dedicating themselves to the service of that one being?

It's not news to anyone that it's pretty easy to screw up consequentialists. The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."
Is that a solution to a particular problem, or a lifestyle choice?
It's a solution to a problem of bad (underspecified) ethics. The lifestyle choice I am referring to here is "MAXIMIZE ALL THE THINGS." But of course ethics is hard to fully specify because human minds are involved. It's hard to have models of those. Most of the specification work, the dominating term, is in the most difficult to model part. In this sense I think virtue ethics is playing in the right stadium. They are trying to describe things in terms of the part of the problem that is hardest to model.
Is that a lifestyle choice?
You are describing a utility monster, I believe.
Felix: http://www.smbc-comics.com/?id=2569
Richard Stallman could be that kind of man, although he prefers that people be informed thinkers rather than servile followers.
To be pedantic, any hypothetical being can exist, as long as it remains hypothetical. What's the real question here?

Direct-impact careers - the topic EAs often skirt around. I'm somewhat disturbed by one of my first exposures to EA: the claim that medicine is not an effective altruistic career whatsoever, because the sheer supply of people interested in and capable of becoming doctors is so great (even after artificial restriction). It seems rather theoretical. What is the economic term for 'replaceability of workers by the labour workforce'? It would be something like a human equivalent of fungibility, perhaps with some element of 'elasticity'. I'd want to see empirical work in this area.

I'm not sure whether the "even after" makes sense in that sentence. There are a lot of interested and capable people applying to med school. If you get into med school, that means one of those people won't. On the other hand, if you become a skilled programmer, you don't take a job away from anyone. That's why 80,000 Hours recommends that people become a programmer at a startup rather than study medicine.
I'm sure someone else can answer this better, but it sounds like you're asking for "empirical work," but aren't willing to explain why you're unsatisfied with the empirical work that you can find by searching websites like GiveWell and 80000 Hours.
I actually just forgot to recheck those sources. It happened with one of my recent posts too. I suppose it's a habit I ought to pay more attention to. Though, the topic could be missing from 80k too.

My favourite word of the day: "edgelord-twee". (from here)


If you are analysing survey data about gamblers' attitudes to laws about casinos, that couldn't be specified in PICOT format, right?