If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Next Open Thread

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


In my small fourth grade class of 20 students, we are learning how to write essays, and get to pick our own thesis statements. One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.

Does anyone know why people do this?

Be charitable; don't assume they're trying to present themselves as martyrs. Instead they could be outlining the peculiar challenges and difficulties of their particular positions.

Life is hard for everyone at times.

Anybody should be able to write an essay "why my life is hard." They should also be able to write an essay "why my life is easy." It might be a great exercise to have every student write a second essay on a thesis which is essentially the opposite of the thesis of their first essay.

I wouldn't ascribe conscious intent to their actions, but it may be that making your own life seem harder is an evolved social behavior. Remember, humans are adaptation-executors, not fitness-maximizers, so it's entirely possible that the students thought they were being honest, when in fact they may have been subconsciously exaggerating the difficulties they were facing in day-to-day life. Related: Why Does Power Corrupt?

One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.

Taken at face value, the four statements aren't incompatible. Saying that being X is hard in an absolute sense isn't the same as saying that being X is harder than being Y in a relative sense, or that X people are being oppressed.

Sure, but the point is that the same argument applies to the flipside: everyone could've written essays like "X is fun" or "Y is fun" without contradiction. But they chose "hard" instead. Why?
There were sixteen other students in the class. For all we know, theses about fun things could have been in the majority. If you accept what I wrote in the GP, where do you see a contradiction in the four statements? And if you don't, could you try to articulate why?
Yeah, maybe. And no, I don't think you had a contradiction either. I was just saying that you could do the same thing with "fun." And maybe other kids did, as you say.
It is much easier to notice the things in your situation that don't go well than notice all the things that happen in someone else's situation. I'm curious; have you pointed this out to the students? If so, how did they react?

Alex Miller, my son, is one of the students.

Ah, that clarifies that. I think I read "we are learning" as the teacher saying that since I've seen teachers use that language (e.g. "next week we'll learn about derivatives").

Alex greatly enjoyed being mistaken for his teacher.

So nice that you two are able to enjoy LessWrong together. Given that this is an open thread, is there anything you (or Alex) would like to share about raising rationalists? My daughters are 3yo and 1yo, so I'm only beginning to think about this... EDIT: I made a top-level post here.
Alex loves using rationality to beat me in arguments, and part of why he is interested in learning about cognitive biases is to use them to explain why I'm wrong about something. I have warned him against doing this with anyone but me for now. I recommend the game Meta-Forms for your kids when they get to be 4-6. When he was much younger I would say something silly and insist I was right to provoke him into arguing against me.
Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)
The more you can blame whatever difficulties and frustrations you have on things outside your control, the less you have to think of them as your own fault. People like to think well of themselves.
Each experience has its own difficulties that are unknown unless you've lived it.
Corollary: one's own difficulties always seem bigger than everyone else's.
A lot of times, different ways that people act are different ways of getting emotional needs met, even if that isn't a conscious choice. In this case it is likely that they want recognition and sympathy for the different pains they have. Or, more likely, the different hurts they have (being lonely, being picked on, getting hand-me-downs, whatever) are easily brought to mind, and when the person tells someone else about the things in their life that bother them, someone could say "hey, it sounds like you are really lonely being an only child" and they would feel better.

Some example needs are attention, control, acceptance, trust, play, and meaning. There is a psychological model of how humans work that treats emotional needs like physical needs such as hunger. So people have some need for attention, and will do different things to get attention. They also have a need for emotional safety, just like physical safety. Just as someone sitting in an uncomfortable chair will move and complain about how uncomfortable the chair is, someone will do something similar if their big brother is picking on them.

Another reason people often make it look like they are being oppressed is that they feel oppressed. I don't know if you are mostly talking about people your age, or everyone, but it is not a surprise to me that lots of kids feel oppressed, since school and their parents prevent them from doing what they want. Plenty of adults express similar feelings, though; I just expect not as many.
Maybe they are friends and discussed their thesis topics with each other. I find it unlikely that 4 out of 20 students would come up with sibling related topics independently.
Or maybe they picked them out loud in class, and some of those were deliberate responses to others. So what happens is: Albert is an oldest child whose younger sister is loud and annoying and gets all the attention. He says "I'm going to write about how being an older sibling is hard." Beth is a youngest child whose older brothers get all the new clothes and toys and things; she gets their hand-me-downs. She thinks Albert's got it all wrong and, determined to set the record straight, says "I'm going to write about how being the youngest child is hard." Charles realises that as a middle child he has all the same problems Albert and Beth do, and misses out on some of their advantages, and says he's going to write about that. Diana hears all these and thinks, "Well, at least they have siblings to play with and relate to", and announces her intention to explain how things are bad for only children.

Notice that all these children may be absolutely right in thinking that they have difficulties caused by their sibling situation. They may also all be right in thinking that they would be better off with a different sibling situation. (Perhaps there's another youngest child in the class who loves it -- but you didn't hear from him.)
Yeah, that sounds like the most likely possibility actually.
Given how successful you were, it looks better if that success came under worse circumstances. Thus, people benefit from overstating their challenges. And since people aren't perfect liars, they also come to overestimate their challenges.
Because running in the oppression olympics is the easiest way to gain status in most western societies. Looks like even children are starting to realise that, or maybe they're being indoctrinated to do so in other classes or at home.

I would like to point out that this is the only comment in the thread that doesn't assume that this behavior is culturally invariant, and suggest that the rest of LW think about that for a while.

I think the term "oppression olympics" is needlessly charged. But it is a good question: under what conditions will someone voice a complaint, and about what?

We learn early on that voicing certain complaints results in social punishment, even when those complaints are "valid" according to the stated moral aspirations of the community. If memory holds, the process of learning which complaints can be voiced is painful. But at the same time, not all superficially negative self-disclosures are a true social loss: signaling affliction seems to have been a subcultural strategy for quite a while, nowadays among teenagers, but we also have famous references to the over-the-top displays of grief and penitence in ancient (Judeo-Christian) cultures. And of course, complaints can also result in support, or can play a role in political games. So there's a cost-benefit calculation happening somewhere in the system, which we might hope to be reasonably specific about.

To touch on some controversies: there's a big push to reduce the dissonance between what we publicly accept as grounds for complaint and what we actually punish people for complaining about. Accepting for the moment that our stated principles are okay (which is where I expect you might disagree), this can still go wrong several ways:

1. People may mistake the aspiration for reality, e.g. we tell kids they should complain about bullying and feel like we're making progress, but then we allow the system to punish kids just as harshly as ever after their disclosure, because we can't or won't change it.

2. Or we feel that offering non-complaint-based advice is perpetuating or accepting a discrepancy between "valid complaints" and "effective complaints", e.g. the outcry when someone suggests a concrete way to avoid being sexually assaulted, or voices a concern about "victim mentality" (the mistake of thinking that complaining is more effective than it really is, often because everyone is only pr
This is not a good thing to accept, since the stated principles are themselves subject to change. Hence:

5. Once society starts taking complaint X seriously enough to punish the perpetrator, people start making (weaker) complaint X'. Once society takes that complaint seriously, people start making complaint X'', etc.

I would argue that, long term, 5 is actually the biggest problem.
I think we need to separate complaints of the type "what you did was not against the rules but it still hurt me" from "you violated the rules, and hurt me through that". The second complaint is very powerful. The first one requires high amounts of compassion in the other person to work.

I mean, extrinsic motivation replaces intrinsic motivation. This means that while with a complete lack of rules people may - may - be compassionate, if Behavior No. 11 is forbidden under threat of punishment because it hurts others, then people will care more about the fact that it is forbidden and punishable than about the hurt it causes to others. For example, the fact that rape carries heavy prison sentences reduces compassion for rape victims: see victim-blaming and related behaviors. It simply turns the discussion away from "Does Jill feel hurt from what John did?" towards "Is John really evil enough for five years in prison?", and if not, then it is so easy to write off Jill's hurt.

But the catch is, if Behavior No. 11b is sufficiently similar but not expressly forbidden, the rule and punishment for Behavior No. 11 may still prevent compassion towards its victims, even in people who would have compassion towards the victims of behavior that is entirely unregulated. And that is how it comes to require extraordinary compassion to give a damn about "what you did was not against the rules but still it hurt me". Modern societies are so strongly regulated by both law and social pressure that almost any kind of hurt will at least resemble a different hurt that is forbidden, and hence the intrinsic compassionate motivation is lost.

And that is why people who are not extremely compassionate give no damn about e.g. accusations of misgendering. It sounds roughly like the rules of politeness learned in childhood, i.e. you will address the neighbor with "good morning, Mr. Smith", not "hi, old fart", or get punished. Since this sounds similar, but there is no such actual rule that is enforced, not extrem
How about the question "Is it reasonable for Jill to feel hurt by what John did?" Otherwise you're motivating Jill to self-modify into a negative utility monster.
This sounds simple enough, but I think this is actually a huge box of yet-unresolved complexities.

A few generations ago, when formal politeness and etiquette were more socially mandatory, the idea was that the rules go both ways: they forbid ways of speaking many people would feel offended by; on the other hand, if people still feel offended by approved forms of speaking, it is basically their problem. So people were expected to work on both what they give and what they receive (i.e. toughen up to be able to deal with socially approved forms of offense). This is very similar to how programmers define interface / data-exchange standards like TCP/IP. Programmers have a rule of "be conservative in what you send and be liberal in what you accept" (i.e. 2015-03-27 is the accepted XML date format and you always send this, but if your customers are mainly Americans you had better accept 03-27-2015 too, just in case), and this too is how formal etiquette worked.

As you can sense, I highly approve of formal etiquette, although I don't actually use it on forums like this as it would make me look like a grandpa. I think a formal, rules-based, etiquette-oriented world was far more autism-spectrum friendly than today's unspoken-rules world. I also think today's "creep epidemic" (i.e. lots of women complaining about creeps) is due to the lack of formal courting rules making men on the spectrum awkward. Back when womanizing was all about dancing waltzes at balls, it was so much easier for autism-spectrum men who want formal rules and algorithms to follow. I think I could, and perhaps should, spin it as "lack of formal etiquette, esp. in courting, is ableist and neurotypicalist".

Of course, formal etiquette also means sometimes dealing with things that feel hurtful but approved, and the need to toughen up for cases like this. Here I see a strange thing. Remember when in the 1960s the progressive people of that era, i.e. the hippies, were highly interested in stuff like Zen? I approve of th
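The "conservative in what you send, liberal in what you accept" rule mentioned above can be sketched in a few lines of Python. The list of accepted variants here is purely illustrative, not any real standard:

```python
from datetime import datetime

# Postel's law applied to the date-format example: always emit one canonical
# format, but accept a few common variants. The variant list is an assumption
# for illustration only.
ACCEPTED_FORMATS = [
    "%Y-%m-%d",  # 2015-03-27, the canonical form we always send
    "%m-%d-%Y",  # 03-27-2015, common US ordering, accepted just in case
    "%d.%m.%Y",  # 27.03.2015, common European ordering
]

def parse_date_liberally(text):
    """Be liberal in what you accept: try each known format in turn."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognised date: %r" % text)

def emit_date_conservatively(d):
    """Be conservative in what you send: one canonical format only."""
    return d.strftime("%Y-%m-%d")

print(emit_date_conservatively(parse_date_liberally("03-27-2015")))  # 2015-03-27
```

Note that the order of the format list is itself a policy decision, since some input strings could in principle match more than one format; liberal acceptance always carries some ambiguity risk.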
Depends on what one means by "rape". If you are using the standard definition from ~20 years ago (and for all I know still the standard definition in your country), I agree. However, recently American feminists have been trying to get away with calling all kinds of things "rape".
I actually know a woman who was a nice and reasonable human being, and then had a very nasty break-up with her boyfriend. Part of that nasty break-up was her accusations of physical abuse (I have no idea to which degree they were true). This experience, unfortunately, made her fully accept the victim identity and become completely focused on her victim status. The transformation was pretty sad to watch and wasn't good for her (or anyone) at all.
I would argue that the sentimental compassion it exploits is a very specifically American feature, and it is less effective elsewhere. If I had to guess, American culture has uniquely selfish subsets (such as the Ayn Rand fans), and as a reaction, the opposite shine-with-goodness attitude evolved, which then gets exploited. What you see is the middle ground missing, probably.

A good example is middle-class people seeing the welfare state either sentimentally, with hearts going out to the poor, or through the judgemental "bunch of lazy leeches" view; both are moralistic. The middle ground that is missing is the simple "customer" attitude to the welfare state: "well, I might need it any time, better make sure it works right, potentially for ME", which is the most common European attitude. This middle ground is missing because there is a tribe that derives identity from shining-with-goodness, and another tribe from selfishness, usually interpreting selfishness as toughness. Both can be exploited. Oppression olympics exploits the shine-with-goodness tribe, and shit like not even a year of paid maternity leave exploits the my-selfishness-is-toughness tribe.

But I think in Western societies that go for the middle in things like this, oppression olympics, e.g. complaining about misgendering, generally gets answers roughly like "But I am just doing what the rules and social customs permit / prescribe?", with the connotation "Why exactly would I care about your personal feelings?"
Related post: http://lesswrong.com/lw/9b/help_help_im_being_oppressed/

Since Eliezer has forsaken us in favor of posting on Facebook, can somebody with an account please link to his posts? His page cannot be read by someone who is not logged in, but individual posts can be read if the URL is provided. As someone who abandoned his Facebook account years ago, I find this frustrating.

Why would you not create a sockpuppet facebook account for the purposes of reading posts you want to read?
Not speaking for above poster: because that's not actually trivial - you need a real fake phone number to receive validation on, etc. Also, putting fake data into a computer system feels disvirtuous enough to put me off doing it further.
facebook still does not have my phone number. Not sure what you did to need a phone number verification...
I misremembered, you are correct. I was possibly instead frustrated with finding a temporary email that it would accept (they block the most common disposables I think).
Interesting. I consider poisoning big surveillance/marketing databases to be virtuous X-D
I don't like to frustrate the poor databases' telos, it is not at fault for the use humans put its data to. (Yes, I realise this is silly. It's still an actual weight in the mess I call a morality; just a small one.)
The database is only techne in that context, its own telos lies in maintaining nice tables and properly responding to queries -- things I do not mess with :-)

There seem to be some parents (and their children) here. I myself am the father of 3yo and 1yo daughters. Are there any suggestions you have for raising young rationalists, and for getting them to enjoy critical, skeptical thinking without it backfiring from being forced on them?

Julia Galef, President and Co-founder of the Center for Applied Rationality, has video blogged on this twice. The first was How to Raise a Rationalist Kid, and the second is Wisdom from Our Mother, which might be a bit more relevant to you because, in that video, her brother Jesse specifically discusses what his mother did in situations where he wasn't enthusiastic about learning something. I should say that it has more to do with when your kids think that they're bad at things than with when they reject something out of hand. To that I would say, and I think many others would say: Kids are smart and curious, rationalism makes sense, and if they don't reject everything else kids have learned throughout history out of hand, then they probably won't reject rationalism out of hand.

I also am the father of 3yo and 1yo daughters. One of the things I try to do is let their critical thinking or rationality actually have a payoff in the real world. I think a lot of times critical thinking skills can be squashed by overly strict authority figures who do not take the child's reasoning into account when they make decisions. I try to give my daughters a chance to reason with me when we disagree on something, and will change my mind if they make a good point.

Another thing I try to do is intentionally inject errors into what I say sometimes, to make sure they are listening and paying attention (e.g., "This apple is purple, right?"). I think this helps them avoid just automatically agreeing with parents/teachers, and gets them thinking critically on their own about what makes sense. Now my oldest is quick to call me out on any errors I make when reading her stories, or talking in general, even when I didn't intentionally inject them.

Lastly, to help them learn in general, make their learning applicable to the real world. As an example, both of my daughters, when learning to count, got stuck at around 4. To help get them over that hurdle, I started asking them questions like, "How many fruit snacks do you want?" and then giving them that number. That quickly inspired them to learn bigger numbers.

This sounds like solid parenting; my only concern is that you might not be taking the psychology of children into account. Children sometimes really do need an authority figure to tell them what's true and what isn't; the reason for truth is far less important at that stage (and can be given later, maybe even years later). One issue that could arise is that if you don't show authority then your child may instead gravitate to other authority figures and believe them instead. A child may paradoxically put more faith in the opinions of someone who insists on them irrationally than someone who is willing to change their beliefs according to reason or evidence (actually, this applies to many adults too). It's possible that "demeanor and tone of voice" trumps "this person was wrong in the past." The point is that children's reasoning is far far less developed than adults and you have to take their irrationalities into account when teaching them.
The best thing about my Catholic high school was that it was run by the Salesian Order, which prefers a preventive method based on always giving good reasons for the rules.

[This isn't a direct response to Mark, but a reply to encourage more responses]

To add another helpful framing: if you don't have children, but think that as an adult part of your attraction to LessWrong was based on how your parents raised you with an appreciation of rationality, how did that go? Obvious caveats apply about how memories of childhood are unreliable and fuzzy, and personal perspectives on how your parents raised you will be biased.

I was raised by secular parents, who didn't in particular put a special emphasis on rationality when raising me, compared to other parents. However, for example, Julia and Jesse Galef have written on their blog of how their father raised them with rationality in mind.

Thanks for the call to action. In my own case I became a rationalist in spite of my upbringing. So people like me who don't have that background could really use advice from those who do :)
They left Scientific American lying around a lot. The column that had the fewest prerequisites was Michael Shermer's skepticism column. Also, people around me kept trying to fix my brain, and when I ran into cognitive bias and other rationality topics, they were about fixing your own brain, so then I assumed that I needed to fix it. In terms of religion stuff: My parents raised me with something between Conservative and Reform Judaism, but they talked about other religions in a way that implied Judaism was not particularly special, and mentioned internal religious differences, and I got just bored enough in religious services to read other parts of the book, which had some of the less appealing if more interesting content. (It wasn't the greatest comparative religious education: I thought that the way Islam worked was that they had the Torah, the New Testament, and the Qur'an as a third book, sort of the way the Christians had our religious text as well as the New Testament as a second book.)
Thanks for putting up this branch, Evan. I don't have children. I think my upbringing helped my rationality, but the lens of time is known to distort, so take it with a grain of salt. Most of my rationality influence was a lead-by-example case. Accountability and agency were encouraged too; they may have made fertile soil for rational thought. Ethics conversations were had and taken seriously (paraphrase: 'Why does everyone like you?' 'Cause I always cooperate.' 'Don't people defect against you?' 'Yes, but defectors are rare and I more than cover my losses when dealing with other cooperators.'). Thinking outside the box was encouraged (paraphrase: 'Interfering with the receiver is a 10-yard penalty, I can't do that.' 'What's worse, 10 yards or a touchdown?' 'But it is against the rules.' 'Why do you think the penalty is only 10 yards, and not getting kicked from the game? Do you think the rule, and penalty, are part of the game mechanics?'). Goal-based action was encouraged, and acting on impulse was treated as being stupid (paraphrase: 'Why did you get in a fight?' 'I was being bullied.' 'Did fighting stop the bullying?' 'No.' 'OK, what are you going to try next?').
I am also father of four boys now 3, 6, 8 and 11. You can find some parenting resources linked on my user page.
I know of families who have used the "tooth fairy" as an opportunity to do critical thinking. I think it has gotten mentioned here before. Apparently sometimes children do this on their own. This post is relevant.

Something I frequently see from people defending free speech is some variant of the idea "in the marketplace of ideas, the good ones will win out". Is anyone familiar with any deeper examination of this idea? For instance, whether an idea market actually exists, how much it resembles a marketplace for goods, how it might reliably go wrong, etc.

I think you're better off looking into theories of memetics; that is, a marketplace doesn't seem to be as good an analogy as an ecology. That makes the somewhat less cheery argument that 'good' doesn't mean 'true' so much as 'effective at spreading,' and in particular memes can win by poisoning their competitors through allelopathy, just like an oak tree.

This video is somewhat on topic: The New (and Old) Attacks on Free Thought: Jonathan Rauch on Kindly Inquisitors Jonathan Rauch discusses the new edition of his book, Kindly Inquisitors, and presents a thoughtful and rational defense of free speech. I believe he makes some comparisons between the marketplace of ideas and economic markets and he certainly makes an argument similar to the one that you mention. It is an excellent video, IMO, and well worth watching.
There is a method of devaluing the weight of one's words by noting that saying them doesn't have any actual implications for action. In a free-speech environment, people can become decoupled from their ideas' implications.

Epistemic authors are usually reliable because they have passed a filter for errors. If there is no filter on error, there is no measure of quality. This can easily turn into no public shared filter being wanted at all, with everybody supposed to use their own. That has the failure mode of everybody being entirely on their own when it comes to interpreting information, i.e. that education is not only not provided but it would be wrong to provide it.

One also has to realise that in a marketplace of ideas, bad ideas lose out by going bankrupt. The United States is kind of the home of capitalism, but it pansies out when the laws of the market would require its big banks to fail; instead of natural death, artificial economic support is provided. In the same way, you would need to watch coolly while stupid people are being stupid and hurting themselves. That is, an idea either blooms or goes bust, and if "goes bust" means injury to your health or sanity, you are just supposed to live with it. Or, rather than giving each person a pretty basic but universally provided methodology for learning about the world, you rely on multiple biases canceling each other out, or on clustering the world into different audiences.

The marketplace phrasing might also be about information control. Fox News has a bad name for coloring news and in general being stupid and having an agenda. However, it might be worth it for Americans to hear about the outside world with the spin rather than avoid the spin and not hear about it at all. The idea of a filter bubble is also relevant.

There could also be stresses on meaning and communication. If no consistency of concepts is maintained, the end result might be a Tower of Babel: islands of non-communicating schools of thought. This can
Here's Scott Alexander discussing this concept in the context of lifehacks: http://slatestarcodex.com/2014/03/03/do-life-hacks-ever-reach-fixation/

Recently, there has been talk of outlawing or greatly limiting encryption in Britain. Many people hypothesize that this is a deliberate attempt at shifting the Overton window, in order to get a more reasonable-sounding but still quite extreme law passed.

For anyone who would want to shift the Overton window in the other direction, is there a position that is more extreme than "we should encrypt everything all the time"?

Assuming you just want people throwing ideas at you:

Make it illegal to communicate in cleartext? Add mandatory cryptography classes to schools? Require everyone to register a public key, with a government key server? Stop compensating identity-theft victims and the like if they didn't use good security?

This is already the case in Estonia, where every citizen over the age of 14 has a government-issued ID card containing two X.509 RSA key pairs. TLS client authentication is widely deployed for Estonian web services such as internet banking. (Due to ideological differences regarding the centralization of trust, I think it's unlikely that governments will adopt OpenPGP over X.509.)
Giving people an official RSA keypair in their smartcard government IDs is fine. That solves all sorts of problems, and enables a bunch of really cool tech. Requiring that every public key used in any context be registered with the government, or worse, some sort of key escrow, is a totally different matter.
I was thinking less "everyone must register all their public keys, and you can't have a second identity with its own key" and more "everyone has to have at least 1 public key officially associated with them so that they can sign things and be sent stuff securely." And that Estonian system sounds pretty cool.
What would you estimate the probability of ever having the former without the latter being? Of having that happy state last for more than a few years?
Well the former pretty much describes the current state of affairs. Anyone with a government ID card or national healthcare ID probably has a chip embedded with an escrowed signing key. There's really nothing unique about Estonia here -- they're using the same system everyone else is using. Even if your country, like the USA, doesn't have a national ID of some kind or doesn't have a chip embedded, your passport does. The international standard governing "smart passports" being issued by just about every country in existence for the past 5-10 years includes embedded digital signature capability. Now I don't really know how to estimate the probability of sliding into the latter case. I don't see them as intrinsically connected however.
Generating private/public key pairs is trivially easy.
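As a sketch of just how easy, here is textbook RSA key generation with toy-sized primes. This is for intuition only: real key pairs use primes hundreds of digits long plus padding schemes, none of which appears here.

```python
import math

# Textbook RSA with deliberately tiny primes (toy example, never use for real).
def make_toy_keypair(p, q, e=65537):
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    assert math.gcd(e, phi) == 1   # e must be invertible mod phi
    d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)
    return (n, e), (n, d)          # (public key, private key)

public, private = make_toy_keypair(p=2003, q=2011)

message = 42
ciphertext = pow(message, public[1], public[0])      # encrypt with public key
recovered = pow(ciphertext, private[1], private[0])  # decrypt with private key
print(recovered)  # 42
```

Real-world tools (openssl, ssh-keygen, GnuPG) wrap the same mathematics in safe parameter choices, which is why generating a key pair is a one-line command on any modern system.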

Frame attempts to limit the use of encryption as unilateral disarmament, and name specific threats.

As in, if the government "has your password", how sure are you that your password isn't eventually going to be stolen by Chinese government hackers? Putin? Estonian scammers? Terrorists? Your ex-partner? And you know that your allies over in (Germany, United States, Israel, France) are going to get their hands on it too, right? And have you thought about when (hated political party) gets voted into power 5 years from now?

A second good framing is used by the ACLU representative in the Guardian article: You won't be able to use technologies X Y and Z, and you'll fall behind other countries technologically and economically.

To be a bit more specific than "we should encrypt everything all the time": Mandatory full-disk encryption on all computer systems sold, by analogy to mandatory seat belts in cars — it used to be an optional extra, but in the modern world it's unsafe to operate without it.
The criminalization of all encryption in the U.S. is just one big terrorist attack away.
Doubtful. Too much of the economy takes place online today - you can't have e-banking without strong crypto.
You can have e-banking and e-commerce with "key escrow", though. That didn't fly in the 90s, and it's always been an inane idea, but I could definitely imagine "you should hide from hackers, but not from the police" PR spin ramping up again.
It already did -- see David Cameron's new stance on encryption e.g. here or elsewhere. He's not shy about it.
True. That said, the Internet has proven very good at defending its essential infrastructure, and I suspect it will continue to do so in future.
Good point. I revise my prediction to "after the next big terrorist attack the U.S. will heavily regulate encryption."

Just thought of something. If you want to talk about variation and selection but you can't say 'evolution' without someone flipping a table, then talk about animal husbandry instead.

EDIT: Heh, turns out Darwin actually did this.

At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experiences.)

(This is a repost from last week's open thread due to many upvotes and few replies. However, see here for Gwern's response.)

I meant to post something about my experience with armodafinil about a year ago, but I never got around to it. My overall experience was strongly negative. Looks like I did write a long post in a text file a day or so after taking armodafinil, so here's what I had to say back then:

Some background:

I'm a white male in my mid-20s. I have excessive daytime sleepiness, and I believe this is because I'm a long sleeper who has difficulty getting an adequate duration of sleep. There are several long sleepers in my family. My mother and I tend not to like how stimulants make us feel, e.g., pseudoephedrine makes us fairly nervous, though it will help our nasal congestion from allergies and help wake us up. I was interested in trying modafinil because I hear its negative effects are proportionally smaller relative to its wake-promoting effects.

My neurologist gave me a few samples of armodafinil, the R-enantiomer of modafinil. I was busy in the month after I last met my neurologist and didn't think about taking it at all, but come mid-February I remembered to try it.

Saturday, Feb. 15, 2014:

I woke up at 8:30 am, as I usually did, and started eating a chocolate chip muffin fo... (read more)

Your main complaints about your drug experience seem to be (a) feeling unusual, (b) having some difficulty managing your attention, (c) feeling excessively fidgety, (d) louder tinnitus, and (e) sleep difficulty. As someone who has experimented with psychoactive drugs a fair amount, including modafinil, my impression is that (a) and (b) are pretty common with psychoactive drugs and are almost always transient and harmless (unless you're driving a car, biking, operating heavy machinery, etc.). ((c) is less common but definitely present with some, e.g. coffee. (d) and (e) are probably good reasons to stop using a particular drug.) In fact, I've gotten to the point where I consider feeling unusual and having my attention work differently to be fun, interesting experiences to observe and learn from. So my thought is that before trying modafinil, maybe people should experiment with small doses of strongly psychoactive drugs that don't have a 12-hour half-life, perhaps in a safe and supervised environment, to learn that altered mental states aren't scary and can be pretty useful for certain tasks; they're like distinct mental gears you can enter using cheap, reliable external aids. (For example, drink half a cup of coffee, then a full cup of coffee, then two cups of coffee on separate days to know what it's like to be highly stimulated, and a cup of beer, two cups of beer, and four cups of beer on separate days to know what it's like to be highly disinhibited. Kratom is another highly useful but little-known legal psychoactive; for example, this successful blogger primarily credits kratom with his success at building his online empire, and I'm not surprised at all given my kratom experiences... any resistance I have to doing tasks seems to just melt away on kratom.) (Disclaimer: I'm a foolish young person and maybe you should ignore everything I'm saying. Also, if you really did experience stimulant-induced mania you should probably follow the instructions on the label.)
Appreciate your response and perspective, hg00. I think smaller doses are prudent for people experimenting with these things. If I were to try armodafinil again, I would have cut the pill in half or even quarters. (I had no real choice in the pill dosage, as I only received a sample.) Though, in retrospect, I think avoiding (ar)modafinil altogether would be smart because the half-life is way too long. I'm basically straight-edge, though I'm open-minded and willing to try some drugs if I think they might have a positive effect on me. I've only tried nootropics, and so far I have not been impressed. Either they do nothing or make me feel really strange. Others' experiences may vary. There doesn't seem to be anything here for me. At this point I have no intention of ever trying a drug for non-medical reasons. What I experienced isn't exactly clear, but I didn't like it. In fact, it took several weeks for me to fully recover from taking armodafinil. After a few weeks or so I felt mostly normal, and a bit later the tinnitus finally died down. The latter isn't that unusual for my tinnitus, actually. After exposure to a loud noise I might have louder tinnitus for several weeks. (Not that mine is ever quiet. It doesn't bother me, but I imagine my normal would drive most people nuts. It never goes away and probably will only ever get worse, and I accept that.)
Understood. I don't doubt your self-assessments, just wanted to provide a contrasting perspective. For tinnitus, you might want to try googling "tinnitus replacement therapy" or experimenting with ear/jaw/neck massage; both of these seem to have been helpful for me.
I've looked into tinnitus retraining therapy (I think this is what you meant) but decided I'm not bothered enough by my tinnitus to go that route. I'll keep it in mind if this changes. I have not heard about massage helping tinnitus. I'll have to give that a shot as I'm sure it would be enjoyable even without tinnitus relief. Otherwise, I've found noise machines to be helpful. Sometimes I also listen to a brown noise mp3 when working and I don't want to listen to music. I find that this totally masks my tinnitus, masks most ambient noises, and is rather pleasant (it sounds like a waterfall). (I want to note that my brother finds artificial noise to be worse than tinnitus, so your mileage may vary.) If you use Linux and have the right software installed (sox and lame), you can run the following commands to generate a 30-minute brown noise mp3:

sox -c 2 --null out.wav synth 30:00 brownnoise vol -0.4dB fade t 3 30:00
lame --preset insane out.wav out.mp3
The core idea behind tinnitus retraining therapy is to listen to noise that doesn't totally mask the tinnitus but is more salient than it. The principle being that it helps you think of your tinnitus as background noise. Seems to work for me.
A month or two ago I started taking Modafinil occasionally; I've probably taken it fewer than a dozen times overall. I think I'd expected it to give a kind of Ritalin-like focus and concentration, but that isn't really how it affected me. I'd describe the effects less in terms of "focus" and more in terms of a variable I term "wherewithal". I've recently started using this term in my internal monologue to describe my levels of "ability to undertake tasks". E.g., "I'm hungry, but I definitely don't have the wherewithal to cook anything complicated tonight; better just get a pizza." Or, on waking up: "Hey, my wherewithal levels are unusually high today. Better not fritter that away." (Semantically, it's a bit like the SJ-originating concept of "spoons" but without that term's baggage.) It's this quantity which I think Modafinil targets, for me: it's a sort of "wherewithal boost". I don't know how well this accords with other people's experience. I do think I've heard some people describe it as a focus/concentration booster. (Perhaps I should try another nootropic to get that effect, or perhaps my brain is just beyond help on that front.) I did, however, start to feel it suppressed my appetite to unhealthily, even dangerously, low levels. (After taking it for two days in a row, I felt dizzy after coming down a flight of stairs.) I realize that it's possible to compensate for this by making oneself eat when one doesn't feel hungry, but somehow this doesn't seem that pleasant. For this reason, I've been taking it less recently. I'd be curious to know whether others experience the appetite suppression to the same extent; it's not something that I hear people talk about very much. Perhaps others are just better at dealing with it than I am, or don't care. It's also hard to say how much of its positive effects were placebo, given that I took it on days when I'd already determined I wanted to "get a lot of shit done". I might still try armodafinil at some point.
Huh, along with the low side effects, sounds like a candidate for a weight loss drug.
Yes, perhaps for some, but I'm already closer to underweight than I am to overweight, so for me that's a big con.
I wonder if activation energy is a good way of describing difficulties with getting started. Discussion of different kinds of wherewithal
Yep, the model in that post is quite close to the one I'm trying to describe.
Mixed feelings. If you need wakefulness it's available on tap, but with a side of anxiety and trouble going to sleep later if your dosage is not perfectly calibrated.
I took modafinil twice. I'd been having problems staying awake during the day -- it's hard for me to sleep before 2am -- and those completely disappeared. I had more energy then than I've had in a while. No negatives. The only reason I haven't gotten more is that I don't have a mailing address. (Disclaimer: I drink a lot of coffee and tea, use a lot of snus, and drink like a relevant ethnic stereotype on weekends.)

I just started using the Less Wrong Study Hall. It's been great! I find myself to be more productive, and there's something fun about being amongst the company of other friendly people.

I don't have anything insightful to say. I'd just like to reiterate that:

1) It exists and you should consider using it (it seems that not too many people know about it).

2) I (and others) think that there should be a link to it in the sidebar.

Tell us about your feed reader of choice.

I've been using Feedly since Google Reader went away, and it has enough faults (buggy interface, terrible bookmarking, awkward phone app that needs to be online all the time) to motivate me toward a new one. Any recommendations?

I use newsblur and it's fine, but I don't use bookmarking or an app or basically anything interesting.
After Reader was shut down, instead of trusting my RSS feeds to another always-online provider I decided to use local clients. I use dropbox to maintain the feed list and read status synchronised between all devices I need it on.
I've found Feedly on a browser is much more manageable than the Android app.
Feedly's default settings on the app are intolerable, though they can mostly be fixed with settings changes. I actually prefer the app to the desktop now because I use it to pack dead time with reading my RSS feed instead of productive time.
I use rawdog. It runs on my computer and generates a single HTML file, which contains a nice unified list of articles (rather than the common alternative, a list of feeds which I then have to drill down into). It doesn't rely on any external services other than the feeds themselves. By diddling with the template it uses to generate the HTML, I have given it a little interactivity (e.g., I can tell it to "collapse" some feeds so that they show only article titles rather than content; I can then un-collapse individual articles). Last I checked, it didn't work on Windows but could be coerced into doing so by fiddling with the source code (it's in Python). There is a thing called Tiny Tiny RSS that, from what others have said, I suspect may offer kinda-similar functionality but better (with perhaps a bit more effort to get it set up initially). I keep meaning to check it out but failing to do so.
Interesting and thanks for the explanation. I have upvoted this comment, and other responses to the parent that actually gave reasons for choosing a particular feed reader.
I tried using RSS readers, but I tended to forget to check their websites or apps. I could have trained myself to check them more often but I ended up using https://blogtrottr.com/ instead. It sends RSS feeds to your email inbox, so I can check blogs along with my email in the morning. I haven't had any issues so far. They send you ads along with the feed to generate revenue. Having a revenue model is a solid plus in my book. What I don't like about it: they don't have accounts so managing subscriptions is a little hard.
I've tried TheOldReader, which worked well, even when they had to handle the sudden influx of Google Reader refugees. I'm currently using InoReader, which works very well, and Bloglines, which seems to be broken (for nearly a week now IIRC, and not for the first time in the last year).
Do you pay for The Old Reader?
IIRC I used it in the brief interlude between "we're hobbyists providing a little free service for people who aren't very happy with Google's latest changes" and "holy hell, they're shutting down Google Reader and our userbase just went up by an order of magnitude; we can't keep this site public anymore". IIRC I would have been willing to shell out $3/month for the service, but by the time that option opened up I'd discovered InoReader.
I switched to The Old Reader, which, as the name suggests, is pretty close to Google Reader in functionality.
I used Safari until Apple removed the RSS functionality, then switched to Vienna. OSX only.
I use Digg Reader. It does not have any social networking features, but otherwise it basically works like Google Reader did. For a while I was also using The Old Reader, but I switched away when it briefly looked like they were going to shut down. Digg Reader and The Old Reader seem very similar.
I simply use the wordpress.com reader (I have a blog that I update through there, so it consolidates the tools I use). I notice it tends to have a bit of a delay in getting new posts, but I don't mind not being absolutely to-the-minute up to date.
Digg is good for me.
I use RSS Feed Reader (a Chrome plugin). It's been fairly good to me, though I have noticed a couple of my feeds disappearing over time. Unsure if this is due to abandonment by the feed admins or due to software issues. I'd still recommend it as a decent option, but I'd believe that better ones exist elsewhere.
I use Vienna.
I use Firefox's built-in "live bookmarks".

We're looking for beta testers for the 16th "annual" Microsoft puzzle hunt. Interested folks should PM me, especially if you're in the Seattle area.


Uhm, this is a rather weird way to describe how I think... but I feel like I've come full circle. I'm automatically thinking of ways to optimize, automatically trying to better understand the world around me. I'm reading LW articles and I sometimes think "yeah, I know about this". I no longer feel the "Aha! How did I not realize this seemingly obvious thing I should have thought of already?" that hurts my nerd always-be-right ego; rather, I read mid-post and just feel like I know this stuff already.

Naturally, I'm still not 100% perfect, but I think I'm on the right path. I've mostly been a lurker and registered not long ago. Has anyone else gotten the same feeling? It isn't really backed up by anything other than the "I know this already" thought.

Yes. Oftentimes people who played lots of games will describe the feeling as "leveling up," and it's a normal and desirable part of growth. This quote is relevant: it's important to not say "well, I've leveled up, no more growth necessary!", but instead always be on the lookout for the way to get to the next level. But the path that got you from level n-1 to n and the path that gets you from level n to level n+1 may be very different, and the restlessness that comes with feeling like you know this stuff is useful for getting you to look elsewhere. (I'm not saying that you're "done with LW," but I do think you're "done with lurking" and I think that you've done the right thing by registering; it makes for different kinds of interaction, which leads to different kinds of learning.)
I don't have a link, but something like this was already mentioned on LW... when you have already mastered some kind of thinking, it seems "obvious", even if it seemed original and awesome when you were reading it for the first time. Although, this only proves that you have become more familiar with LW style of thinking. It does not automatically follow that "LW style of thinking" is "rationality". (Although I personally believe it is related.)
Well, that's a nice thing to point out. Was there any research into how many lives were effectively changed by LW? Also, has anyone else gotten the feeling that there's some sort of innate rationality? It's the same thing as the awesome flare you feel when the seemingly obvious things are pointed out. I probably wouldn't be thinking like this if it weren't for anything LW-esque. (Maybe LW has something unique going for it?) Maybe it's something unique to me - but sometimes I feel certain things inside me were either locked or repressed, or in the case of actions, misguided.
No, only anecdotal evidence.
I haven't "come full-circle", but I've had a similar experience. I haven't read all of LessWrong Sequences, maybe not even half. Some old friends of mine got me into the meetup at a time when I was studying microeconomics, and started majoring in cognitive science. So, I was enthralled by discussion, and went around the Internet and life learning about related topics. Occasionally, I read Sequences essays I haven't read before, and I realize I get the gist halfway through reading it. That's my "yeah, I know about this...". It works for me epistemically. It might have helped that I tried to rationalize the existence of the Christian God as a child, up to the point of deism not specific to any religion, and finally to virtual atheism. I found by the time I encountered arguments for or against the existence of God in theology or philosophy in university, I wasn't phased by any of them because I'd generated all of them on my own before. That's another "yeah, I know about this" set of experiences, rather than a series of "Aha!'s" I expected. These mental exercises may have prepared me for future thinking on LessWrong. Sometimes I'm not as curious as I used to be, and I don't often automatically think of ways to optimize. Instrumentally, I don't believe I'm "on the right path" for fulfilling my own goals. However, that is confounded by other factors of my own life I'm not willing to discuss publicly. So, I'm unsure how instrumentally rational I may or may not be.

I have (what I presume to be) massive social anxiety. I live near lots of communities of interest that probably contain lots of people I would like to meet and spend time with, but the psychological "activation energy" required to go to social events and not leave halfway though is huge, and so I usually end up just staying at home. I would prefer to be out meeting people and doing things, but when I actually try to do this, I get overcome by anxiety (or something resembling it), and I need to leave. Has anyone else had this problem, and if so, w... (read more)

In my personal experience, what I thought was anxiety largely went away when I was treated for depression. So I'm just gonna recommend what Scott has to say on that matter: http://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/
Thank you! Based on the test Scott linked and my own subjective experience, it seems very unlikely that I am depressed. Which aspects of your treatment helped with what you thought was anxiety?
Well, I suspect the drugs (SSRIs) helped. So did being reminded that I actually had a lot more control over my situation than I alieved I did, and doing something about it (namely, changing jobs). Thing is, the problem I went in with was "I can't sleep, I'm nervous too damn much, and I'm doing terribly at work." Not "I can't get out of bed, nothing is fun, I'm thinking of killing myself, and heroin sounds like a smashingly great idea" — the sorts of things I associated with the label "depression". And I certainly didn't go in with "Doctor, I need to be more comfortable in social situations from parties to random crowds than I ever have before in my life." But that ended up happening anyway, which is pretty interesting.
Do you do any sports? Martial arts classes, for example, give you an environment where you face your anxiety head-on.
I can offer at least two points of view. The first is that what I thought was massive social anxiety was actually just social inexperience; that is, a large part of my anxiety derived from not knowing the accepted social protocol in a given situation. Usually sitting quietly and observing what others did helped. The second is that you need to subdivide and identify which steps of social interaction you are able to do and which you aren't. For example, instead of just throwing yourself into a social gathering, you can (for example) get ready and go out from your house, but not get in front of the place. Or you can get in front of the place but not enter. Or you can enter but feel a sense of urgency that prompts you to leave immediately after, etc. Instead of "just practicing" whole interactions, identify the smallest next step that you can practice, and if you can't practice that step, subdivide into even smaller units (e.g. literally just doing the next step).
I recommend reading section 19 (on the management of social anxiety disorder) in the recent treatment guidelines from the British Association for Psychopharmacology (pp. 17–19). A sample: From a patient perspective, the guidelines suggest that each of the following four approaches should be similarly effective for the treatment of social anxiety, as long as the care provider is adequately trained and up-to-date with current best practice:

* Pharmacotherapy
  * given by a psychiatrist.
  * given by a primary care physician.
* Psychotherapy
  * with a therapist.
  * in a group setting.

Hello! I'm working on a couple of papers that may be published soon. Before this happens, I'd be extremely curious to know what people think about them -- in particular, what people think about my critique of Bostrom's definition of "existential risks." A very short write-up of the ideas can be found at the link below. (If posting links is in any way discouraged here, I'll take it down right away. Still trying to figure out what the norms of conversation are in this forum!)

A few key ideas are: Bostrom's definition is problematic for two reasons: ... (read more)

This is a nice paper, and is probably the sort of thing philosophers can really sink their teeth into. One thing I really wanted was some addressing of the basic "something that would cause much of what we value about the universe to be lost" definition of 'catastrophic', which you could probably even find Bostrom endorsing somewhere.

Who chooses the Featured Articles of the week?

The homepage is controlled from the wiki here; it includes the template Lesswrong:FeaturedArticles that google tells me is here. From the history, the editor of three years tenure has wiki username Costanza and is probably the same as the LW user of the same name.
This is just a guesstimate, not an informed answer. The Featured Articles of the week seem topical to what's happening that week, such as recent events, a national date, or new developments in some organization. I'm guessing it's an administrator who pays attention to such things closely, so maybe lukeprog. That's just the availability heuristic at work, though. It could be an administrator who doesn't post very often, but still follows events closely.

How long do the effects of caffeine tolerance (where, off caffeine, you're below baseline and caffeine just brings you back to normal) last? If I took tolerance breaks in between stretches of caffeine use, could I be better off on average than if I simply avoided caffeine entirely?

I think you are thinking about this the wrong way. People become caffeine-tolerant quickly, but tolerance goes away pretty quickly too. You would get more benefit out of the opposite approach - spending most of your time without caffeine, but drinking a cup of coffee rarely, when you really need it. You would effectively be caffeine-naive most of the time, with brief breaks for caffeine use, and thus never develop much of a tolerance. If it's been so long since that first cup of coffee that you don't remember it, trust me: the effects of caffeine on a caffeine-naive brain are incredible.

I know I once read a study that says you can get back to caffeine naive in two weeks if you go cold turkey, but I can't find anything on it again for the life of me. I do remember distinctly that going cold turkey is a bad plan, as the withdrawal effects are pretty unpleasant - slowly lowering your dose is better.

On a more practical level, it is certainly possible to have relatively little caffeine, such that you aren't noticeably impaired on zero caffeine, while still having some caffeine. The average coffee drinker is far beyond this point. I would try to lower your daily dose over the ... (read more)

Yes, a cup of coffee is too much.
This is a hypothesized explanation for the acute performance-enhancing effects of caffeine that fits well with the Algernon argument, but it is not a conclusive result of the literature. For instance, the following recent review disputes that. Einöther SJL, Giesbrecht T (2013). Caffeine as an attention enhancer: reviewing existing assumptions. Psychopharmacology, 225:251–74. Abstract (emphasis mine): The authors' conclusions: Note the following conflict of interest:

Precommitting to a secret prediction which I'll reveal on April 15. MD5 hash for the prediction is 38bd807a6872f6a5622aa2b011fd8f03 .

This is advance notice that unless your prediction is a short bit of plaintext that obviously doesn't have more than a few bits' worth of scope for massaging, your use of MD5 is likely to be taken as showing that you cheated.

Valid point. Here is the SHA-1 hash: f886dee5be3192819b3cd596cd73919f5c1e0a2c .

Copy of JoshuaZ's SHA-1 hash as of 2015-01-20 18:06 GMT: f886dee5be3192819b3cd596cd73919f5c1e0a2c .

Hash copy: 38bd807a6872f6a5622aa2b011fd8f03
I just realized that editing the grammar above was an issue since it doesn't show when the edit occurred so repeating the hash here in a comment which will remain unedited: 38bd807a6872f6a5622aa2b011fd8f03 .
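For anyone wanting to make a similar hash precommitment, here is a sketch using standard command-line tools (the prediction text is a placeholder; SHA-256 is used rather than MD5 or SHA-1, which are considered too weak for this purpose, and a random nonce is appended so a short prediction can't simply be brute-forced from its digest before the reveal):

```shell
# Write the prediction plus a random nonce to a file
echo "My prediction goes here. Nonce: $(openssl rand -hex 16)" > prediction.txt

# Publish only the digest now; reveal prediction.txt later
sha256sum prediction.txt
```

Anyone can verify the commitment after the reveal by re-running sha256sum on the revealed file and comparing digests.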

What makes teams more effective

It isn't the total IQ of the team, and whether they're working face to face doesn't matter.

The factors discovered were fairly equal contributions to discussion among members, the level of emotional perceptiveness, and the number of women, though the effect of the number of women is partly explained by women tending to be more emotionally perceptive.

On the one hand, I've learned to be skeptical of social science research-- and I add some extra skepticism for experiments that are simulations of the real world. In this case, ... (read more)

Here are the papers:

* Woolley, Chabris, Pentland, Hashmi, & Malone, 2010
* Engel, Woolley, Jing, Chabris, & Malone, 2014

I wonder how all-female groups compare to groups with just one male, and how all-male groups compare to groups with just one female. It seems to me that it's harder for any one person to dominate whenever people feel the need to signal egalitarian values like a preference for gender or racial equality. I don't know anything about statistics yet, so maybe this is implausible, but I think part of the reason that diversity was an insignificant predictor is that poor theory of mind caused by (?) ingroup favoritism dominates as diversity increases and drowns out the effect of the need to signal egalitarian values. So I think it would be cool to see how collective intelligence changes when you go from 'completely' homogeneous to 'almost' homogeneous in experimental groups composed of subjects from cultures that value egalitarianism highly. I would like to see this replicated with subjects from less egalitarian cultures as well, but that's hard sometimes.
My guess: People on the team need to communicate. This can essentially be achieved in two ways: 1) All team members voice their opinions openly. 2) Some team members don't voice their opinions, but other members are good at reading emotions, so the latter recognize when the former believe they know something relevant. If this model is true, we would expect that equal contribution (no one is silent) or emotional perceptiveness (other people recognize when the silent person wants to say something) increases the team output.

Didn't get a response in the last thread, so I'm asking again, a bit more generally.

I've recently been diagnosed with ADHD-PI. I'm wondering how to best use that information to my advantage, and am looking for resources that might help manage this. Does anyone have anything to recommend?

In the short-term I'm trying to lower barriers for things like actually eating by preparing snacks in snaplock bags, printing out and laminating checklists to remind me of basic tasks, and finding more ways to get instant feedback on progress in as many areas as I can (for coding, this means test-driven development).

My experience of ADHD includes a tendency to become distracted by thought while moving between tasks or places. I have found that headphones with an audiobook help lock my attention down to two tracks instead of half a dozen: I'm either thinking about my task, or the words in my ear. Obviously your mileage may vary, but ADHD people develop all sorts of coping methods, so my broad advice is "experiment with lots of things to help get things done, even if other people are skeptical of their effectiveness."
Keep forgetting to say thanks for the advice. Haven't had the chance to give it a shot yet, but once I get some headphones I will.
You can get accommodations for many academic activities if you are still a student.

I've never studied any branch of ethics, maybe stumbling across something on Wikipedia now and then. Would I be out of my depth reading a metaethics textbook without having read books about the other branches of ethics? It also looks like logic must play a significant role in metaethics given its purpose, so in that regard I should say that I'm going through Lepore's Meaning and Argument right now.

You could dip a toe into the Stanford Encyclopedia of Philosophy.
The best way to tell is to read the metaethics textbook and see what happens. If it turns out you need a crash course on (say) utilitarian thinking, you can always do that and then return to metaethics. What is your reason for wanting to read a metaethics textbook? I ask because the most obvious reason (I think) is "because I want to live a good life, so I want to figure out what constitutes living a good life, and for that I need a coherent system of ethics" but I'd have thought that most people thinking in those terms and inclined to read philosophy textbooks would already have looked into (at least) whatever variety of ethics they find most congenial.
Good point. I ordered it yesterday, and it's supposed to be an easy introduction, so we'll see what happens.

Well, it seems to me that there are so many different schools of normative ethics that, unless we're all normative moral relativists (I don't think we are), most people must be wrong about normative ethics. I've seen claims here that mainstream metaethics has it all wrong, I just found out that lukeprog's got his own metaethics sequence, and some of the things that he claims to resolve seem like they would have profound implications for normative ethics. I guess I feel like I'm saving myself time by not reading about a million different theories of normative ethics (kind of like I think I'm saving myself time by not reading about a million different types of psychotherapy, unless it's for some sort of test) and just learning where the mainstream field of metaethics is, then seeing where Eliezer and Luke differ from it, and whether I agree.

Is it crazy to want to have some idea of what ethical statements mean before I use them as a justification for my behavior? That you say "whatever variety of ethics they find most congenial" makes me think that you might not think it is that crazy. And I mean, I'm at least not murdering anyone right now; I have time for this. And if I don't ever take the time, then I could end up becoming the dreaded worse-than-useless. I'm also curious about FAI, so I'm generally schooling myself in LW-related stuff, hence the books on logic and AI and ethics. I'm working towards others as well.
I found my own answer in the comments of the course recommendations for friendliness thread. Luke says: On normative ethics, Luke says elsewhere: From what I see, he seems to attribute a similarly low significance to most of contemporary normative ethics. Also, the Stanford Encyclopedia of Philosophy has been suggested twice, in case I do need to know anything in particular about normative ethics. I'll keep that in mind. For posterity, as far as I can tell, the most popular undergraduate text on normative ethics is Rachels' The Elements of Moral Philosophy. The 7th edition has good reviews on Amazon. Apparently the 8th edition is too new to have reviews.
Where is this 2nd attempt to explain metaethics by Eliezer?
I'm pretty new, I couldn't tell you for sure. I'm pretty sure it's two posts in that second sequence: Mixed Reference: The Great Reductionist Project and By Which It May Be Judged. I'm pretty sure the rest of the sequence at least is necessary to understand those.
I was looking at this article as a starting point. I end up at either error theory or non-cognitivism. Is there value in reading further down the tree, or would it be like learning more phlogiston theory (at least for me)?
Does it matter? It's not very hard to get up to speed on ethics. Either skim an introductory textbook, or spend a few hours on the Stanford Philosophy encyclopedia.
Oxford's Rhetoric could be helpful in this area.


Genealogy of the ideas contained in Taleb's work. Pretty useful. I had it embedded but it took up the entire page for me.

People are perennially interested in the reliability of hard drives. Here is useful hard data. Summary:

At Backblaze, as of December 31, 2014, we had 41,213 disk drives spinning in our data center, storing all of the data for our unlimited backup service. That is up from 27,134 at the end of 2013. ... The table below shows the annual failure rate through the year 2014.

tl;dr Avoid 3 TB Seagate Barracuda drives.

I spend time in hardware enthusiast communities and am not so impressed with Backblaze. Even here, the Seagate failure rates seem suspiciously anomalous. Also, consider SSDs, which are probably a better match for most people here (my rig has run on a 256 GB SSD for the past 2.5 years and I have yet to want for more storage). Especially for laptops: they use less power (= your battery lasts longer) and can stand up to shock (so your laptop doesn't break if you drop it).
I did not mean to endorse any particular service or give recommendations as to which storage devices should people buy. I found hard data which is rare to come by, I shared it. If you think the data is wrong or misleading, do tell.
Consensus is that modern HDDs from reputable manufacturers have approximately equal, low failure rates, especially after the first year. You should still back up important data (low != 0), but the difference in failure rates in the consumer space is small enough not to really sway purchasing decisions.

Their methodology probably doesn't extrapolate well because they're testing the drives in what amounts to a NAS, and the WD Reds (which did well) are NAS drives, designed to operate 24/7 with vibration and imperfect cooling, whereas the Seagate Barracudas are absolutely not NAS drives (unlike, say, the Seagate NAS drives). So it's not really surprising they had a much higher failure rate, but it would also be incorrect to conclude that you should avoid them. If I'm building a rig for work, internet use, or gaming {1}, then my HDD is going to be in a well-cooled, non-vibrating environment, and not in use 24/7, so I'm essentially throwing away a 15% price premium for the WD Reds (or 60% for the HGST Deskstars). OTOH, if you're backing up your data locally on a NAS, pay the gorram premium.

{1} Again, though, SSDs are increasingly likely the way to go. You can get a sufficiently good 256 GB SSD for about the price of a 3 TB HDD, and if you're never going to use more than 250 GB (which, I'm guessing, covers at least 80% of people reading this who don't already know whether an SSD or HDD better meets their needs), you're essentially getting substantially better performance (up to an order of magnitude), more reliability, and less noise for free. I harp on this because SSDs come in a 2.5-inch form factor, and the more the standard storage option is SSD, the more cases won't have a whole bunch of room taken up by 3.5-inch bays I don't use. More importantly, there'll finally be budget laptops that I don't have to immediately take apart, clone the OS onto an SSD, reassemble, and figure out what to do with the HDD it came with just to get a decent experience. Gah!
I am sorry, the link shows hard data which disproves that statement, and not in a gentle way either. Didn't your first sentence state that all failure rates are "approximately equal"? Make up your mind. Assumption not in evidence. I've seen a LOT of computers totally taken over by dust bunnies :-) The reason you go look at that grey disk where the fan vent used to be is that your BIOS starts screaming at you that the machine is overheating :-D Yes, but that's irrelevant to the original post, which looks at the reliability of rotating-platter hard drives. If you think you don't care about the issue, well, what are you doing in this subthread?
My above comment was poorly written. Sorry. Ahem. Consumer-grade HDDs, used properly, all have about the same low failure rate. If you treat your desktop like a NAS or server, they will drop like flies (as evidenced). If you treat your desktop like a desktop, then a lot of the price-raising enterprise-grade features (vibration resistance, 24/7 operation) count for zilch. They're still higher-end drives, and will last longer, but assuming you give your desktop a fraction of the maintenance you give your car (like, take 5 minutes to blow it out every other year), not a lot. Mea culpa. I'll give you heat, but vibration tolerance and 24/7 operation are enterprise-grade features with minimal relevance to desktop hard drives. Evidence. Evidence. Why I'm inclined to distrust anything Backblaze publishes + evidence. tl;dr Looking at this data and concluding "avoid Seagate Barracuda drives" is a bit like noticing that bikers survive accidents more often when they're wearing a helmet and then issuing a blanket recommendation to a population primarily of car drivers to wear bike helmets. Sure, it'll reduce your expected mortality when you go out for a drive, but not nearly as much as you'd expect from the biking numbers.
Sigh. No. Really, go look at the data. I am not going to take the "consensus" of the Anand crowd over it. The Hitachi Deskstar 7K2000 is a consumer-grade, non-enterprise hard drive. In a sample of ~4,600 drives it has a 1.1% annual failure rate in the NAS environment. The Seagate Barracuda 7200.14 is a consumer-grade, non-enterprise hard drive. In a sample of ~1,200 drives it has a 43.1% annual failure rate in the NAS environment. Those are VERY VERY DIFFERENT failure rates. I, for example, have a five-drive ZFS array at home which is on 24/7. I am very much interested in what kind of drives will give me a 1% failure rate and which kind will give me a 43% failure rate. I am not average, but I hardly think I'm unique in that respect in the LW crowd.
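To put those two quoted rates in whole-array terms, here is a quick sketch. It assumes independent failures, which real arrays tend to violate in the pessimistic direction, so treat the numbers as a lower bound:

```python
# Chance that an n-drive array sees at least one drive failure in a year,
# assuming each drive fails independently at the quoted annual rate.
def p_any_failure(annual_rate, n_drives=5):
    return 1 - (1 - annual_rate) ** n_drives

print(f"Deskstar  (1.1% AFR): {p_any_failure(0.011):.1%} per array-year")   # ~5.4%
print(f"Barracuda (43.1% AFR): {p_any_failure(0.431):.1%} per array-year")  # ~94.0%
```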
Do we actually disagree about anything? We certainly agree that the Barracudas are crap in NASes. I believe that WD Reds are a major improvement and Hitachi Deskstars a further improvement, which is just reading the Backblaze data (which is eminently applicable to NAS environments), so we're in complete agreement that, for NASes, Barracuda << Red < 7K2000.

However, I also contend that, in a desktop PC, a lot of what makes the Reds and 7K2000 more reliable (e.g. superior vibration resistance) will count for very little, so they'll still fail less often, just not 1/40th as much. Even if they're four times as reliable, moving from, say, a 4% annual failure rate to a 1% annual failure rate may not be worth the price premium (using Newegg pricing, the Hitachi drive costs 72.5% more, but on Amazon, the Hitachi drive is cheaper. Yay Hitachi?), especially since RAID 1 is a thing (which would give us a 0.16% annual failure rate at a 100% price premium). Obviously, if you can find higher-quality drives for less than lower-quality drives, use those. But, in what we'd naively expect to be the normal case, if you're paying for features that drastically reduce failure rates in NAS environments, but using your drives in a desktop environment where these features are doing little to extend your drive life, then you're probably better off using RAID 1.

(Why do I use low single-digit annual failure rates? Because I remember Linus of Linus Tech Tips, who worked as a product manager at NCIX and was therefore privy to RMA and warranty rates, implied that's about right. He produces a metric shit-ton of content, though, so there's no way I'm going to dig it up.)

I'm also interested in why you're dismissive of AnandTech. I currently believe they're the gold standard of tech reviews, but if they're not as reputable as I believe they are, I would very much like to stop believing they are.
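The RAID 1 arithmetic in that comment can be checked directly; the 4% single-drive rate is the assumed figure from the comment, not a measured one:

```python
# Independent-failure model: a two-drive RAID 1 mirror loses data only if
# both drives fail in the same year (ignoring rebuild windows and
# correlated failures, which makes this an optimistic lower bound).
single_drive_afr = 0.04              # assumed 4% annual failure rate
mirror_afr = single_drive_afr ** 2   # both drives must fail

print(f"mirror annual failure rate: {mirror_afr:.2%}")  # prints 0.16%
```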
Yes. You keep saying that there are no significant differences in reliability between hard drives of a similar class (consumer or enterprise, basically) in similar conditions. I keep saying there are. I don't follow the hardware scene much nowadays, but I don't think AnandTech was ever considered the "gold standard" except maybe by AnandTech itself. It's a commercial website: not horrible, but not outstanding either. Garden-variety hardware reviews, more or less. In any case, I trust discussion on the forums much more than I trust official reviews (recall Sturgeon's Law).
I've found that modern hard drives tend to be quite reliable for consumer purposes; we've come a long way since the bad old days of the Click of Doom. Their enclosures, not so much. I've had three backplanes for external hard drives, from three different manufacturers, fail in as many years. And one cable. But that table won't give you any information on how common this sort of thing is or how to mitigate your risk.
Heh. I'd say the reverse: modern hard drives are not reliable enough for consumer purposes, since consumers typically don't make backups and a failed hard drive is a disaster. They are sufficiently reliable for professional purposes, where when a drive fails you just swap in another one and continue as before. Yeah, these are usually cheaply made. But then if an enclosure fails you just get another one and no data is lost or needs to be recovered from backups.
Unless the manufacturer in their infinite wisdom has enabled hardware encryption with the keys stored in the backplane.
Ah. Well... -- Doctor, it hurts when I do this. -- Don't do this, then.
The trouble is that it's the manufacturer that does it, and the user who gets hurt.

I'm looking at setting up my own website, both for the experience and to allow hosting of some files for a game I'm making. What I'd like is to register a domain, probably (myrealname).com and/or .ca, both of which are available, set up a wiki on it, and host a few (reasonably large) files. Thing is, I have a computer that stays on 24/7, and I'm generally competent with computers, so I suspect I can probably get by without paying for hosting, which appeals to me.

Can anyone link me to guides on how to do this? My Googling is turning up shockingly little, just "Pay someone for hosting!". I've registered domains before, but never done any hosting.

The two relevant questions here are: * What's your ISP's upload speed and stated policy towards home servers? A lot of ISPs prohibit servers for residential customers, though actual enforcement is rare. * Are you sure you're up to the task of handling security for your home server that will be exposed to the 'net?
You're right, it's prohibited. That doesn't concern me too much. Frankly, no, I'm not sure at all. Good point :/ Follow-up question: What sort of domain/hosting sites can give me, say, a gig of storage and a few gigs a month of bandwidth for a low price?
You can run a small server on EC2 for free for a year. After that there will be cheaper options, but not necessarily cheaper enough for you to care. http://aws.amazon.com/ec2/pricing/
You'll need to configure and run a web server on your computer. The most commonly used stack that is publicly documented, free, and accessible to people just trying stuff out is LAMP. You'll then need to point your domain at the IP address of your server. What kind of hardware are we talking about? How much traffic are you looking at supporting? What kind of internet connection do you have at home? Are you familiar with the concept of mathematical multiplication?
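Before committing to a full LAMP stack, it's worth knowing that Python's standard library includes a minimal file server, fine for local experiments, not hardened for public exposure. A sketch:

```python
# Serve the current directory over HTTP -- a toy stand-in for a real web
# server, useful only for local testing. Port 0 asks the OS for a free port.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_file_server(port=0):
    """Create (but don't start) a file server for the current directory."""
    return HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)

server = make_file_server()
print(f"would serve on http://127.0.0.1:{server.server_port}/")
server.server_close()
# To actually serve: make_file_server(8000).serve_forever()
```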
Regular home PC, fairly dated at this point. Not much traffic is intended, though - it'll have a fairly quiet home page for my job(I'm not allowed to have more, for tedious reasons of legal compliance in advertising), and a hidden wiki that'll be seen by maybe a dozen friends. It's a toy site, not anything serious. Re mathematical multiplication, I assume you don't mean 3x4=12. Is this some sort of traffic collision issue?
As it happens, I do. Depending on what you're planning on hosting, even trying to serve "a few reasonably large files" may be unreasonably slow on a home internet connection. Divide your upload speed by the number of concurrent users you expect; that's the theoretical maximum download speed they can expect from your site.
Ah, fair. I have 10 Mbps nominal upload, and the files in question are a few hundred megs (so too big to pass around by things like email, but not large by the standards of the modern world). I'm not terribly worried about upload speed, if it takes five minutes.
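A quick back-of-the-envelope on those numbers; the 300 MB figure below is a stand-in for "a few hundred megs":

```python
# Megabits vs. megabytes: divide link speed by 8 to get file-size units.
upload_mbps = 10                      # nominal upload, megabits per second
file_mb = 300                         # assumed "few hundred megs" file

upload_mb_per_s = upload_mbps / 8     # 1.25 MB/s
seconds_per_download = file_mb / upload_mb_per_s

print(seconds_per_download / 60)      # 4.0 minutes for a single downloader
# With n simultaneous downloaders, each gets roughly 1/n of this bandwidth.
```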
Acquiring hosting is straightforward. Pick a company with a good reputation, a reasonable price, and all the features you need, sign up, and pay. (I can't be of much help here, as I've used the same hosting company since 2004 or so, and I'm not sure if I could get a better deal elsewhere.) The remainder is more specific, and that might be why you are having trouble finding tutorials. E.g., uploading and setting up a wiki could mean you read tutorials on SSH or FTP, tutorials on file permissions, and/or tutorials on the wiki-specific details of setting up a wiki. All of this depends on your experience level. When I started out, I knew none of this, and I basically figured it out as I went along.
Start by paying someone for hosting. That's enough to learn about. Maybe start by paying Amazon nothing for a year of EC2 hosting. Once you understand how to host a website, you can migrate it to your home computer, where you will run into additional difficulties, like installing a base webserver and automatically updating your DNS. But probably you should stick with paid hosting. For static files, Amazon S3 is extremely cheap. For a full-fledged webserver to install your wiki, Nearly Free Speech will do, and is probably cheaper than Amazon, especially at your usage level.
Why exactly would you like to avoid paying someone for hosting? It seems like a good candidate for a service to be outsourced.
I enjoy developing skills in assorted fields, and my finances are tighter than I'd like at the moment.

I've got a problem. My sleep schedule is FUCKED UP.

Yesterday, I went to bed at around 8:00 AM and got up at 10:00 PM. I don't normally sleep 14 hours, but I've somehow become nocturnal; sleeping from 7 AM until up at 5 PM isn't particularly unusual for me. I'm not actually sleep deprived, but always sleeping through "normal business hours" tends to cause me problems - I can't get to the bank even when it's important - and isn't very convenient for my girlfriend either. My father jokes that I must be turning into a vampire because I'm never awake ... (read more)

Some kind of polyphasic sleep? E.g. from 9 PM to 1 AM (4 hours) and then from 4 AM to 8 AM (4 hours).
You could get your dad to wake you up at 1 pm every day if he's around. For me, having a person wake me up is way more effective. Alternately, just do it the hard way and stay up for 30 hrs.
It's hard to see what scope there is for the problem to get all that much better if you are required to be awake from 1am to 3am (or later) every day. It seems like the best you can do is to try to establish a routine of always going straight to bed (and not reading, browsing the internet, etc., once there) after dealing with your mother, which might maybe get you a ~ 4am-12pm sleeping time on typical days.
That's actually a lot better than what I've been doing recently. :(
What happens if you try to go to bed just ten minutes earlier each day than you did the day before, using an alarm clock? What about ten minutes later?

Anyone have a source for a summary of a full life-extension testing/supplementation regime?


I've let things slide for a while, and want to get back on track with a full regime, including hormones and pharmaceuticals. I'm thinking cardiovascular, blood sugar, hormone, and neuroprotection.


I would not recommend hormones. Beware of Algernon's law - if a simple biochemical tweak were always helpful, it'd probably already be that way. In particular a lot of things that try to work against 'aging' as opposed to specific dysfunctions will probably cause cancer. Thiel is a particular offender there, he recently started taking HGH with the justification "we'll probably have cancer licked in a decade or two". I read that statement to some people in my lab, where it provoked universal laughter.

The key question: helpful, for what? There's no reason to think Evolution has optimized my machinery for longevity. As for giggles, I'll bet on Kurzweil's predictions over the people in your lab.
The longer you live, and especially the longer you remain healthy, the more evolutionarily fit you are. At least insofar as it doesn't funge against other traits evolution cares even more about, there's every reason to think evolution has optimized your machinery for longevity. We might find the non-helpful side of the tradeoff to actually be beneficial to modern human eyes even if they're horrible evolutionarily, like an ideal one would be "doubles your lifespan, makes you infertile", but there won't be anything evolution would see as a free lunch or a good tradeoff. CellBioGuy attributed the "cancer licked in a decade or two" prediction to Thiel, not Kurzweil, do you actually have a source for it from Kurzweil? And does he have any particular reasons for stating it? Because even as someone on board with the singularity thing, that sounds like an insane pipe dream.
A couple of remarks to expand on RowanE's points for anyone who may be skeptical that evolution cares at all about longevity past (something like) typical childrearing age: * Men (but not women) can continue to father children pretty much as long as they live. * Children may receive some support from extended families; the longer (e.g.) grandparents remain alive and healthy, the better for them (hence for their genes, which overlap a lot with the grandparents'). * I bet most things that make you more likely still to be alive at 80 also make you likely to be healthier (hence more useful to your children) at 30.
This is all true. However, buyandbuydavis has a point. Evolution optimizes for offspring, and longevity is only selected for as a means to that end. When you selectively breed and mutate fruit flies and nematodes for lifespan over hundreds of generations, you can double or triple it, universally at the expense of total offspring. Granted, mammals are much more K-selected, putting lots of effort into a few offspring, than those r-selected species that throw hundreds or even thousands of eggs to the wind per generation, so lifespan does matter at least some to us, and we probably already lie somewhere along that evolutionary axis away from the flies. But you can still see how there might be some tension between the two optimizations, and we're certainly not perfectly optimized for longevity. That doesn't change my assessment that, within any given existing evolved, tuned organism, a lot of the evidence I've seen suggests that mucking with hormone levels exogenously (as opposed to endogenously through general health, activity, diet, etc.) to try to keep energy or cell division or whatever up, in the absence of an existing pathology of that hormone system, will probably increase cancer rates. There's actually a promising line of research on a substance being developed by one of the scientific granddaddies of my current metabolism research that appears to be broadly neuroprotective via messing with regulation of aerobic respiration, something that also goes weird in muscles with age. I greatly look forward to seeing if it increases tumor rates too [there are biochemical mechanistic reasons I think it might] or if that particular dysregulation is something you can attack without nasty side effects (though I gotta say I would take a raised cancer risk to hold off Alzheimer's or parkinsonism or traumatic brain injury any day).

Remember the 80/20 rule. Don't over-optimize; it could be expensive and dangerous.

At least get your diet in line before you worry too much about pharmaceuticals.

Start with medical tests for cholesterol, blood pressure, vitamin D, magnesium, diabetes and anything else your doctor recommends based on your age, family and disease history.

I don't know if this is what you meant by "summary", but Kurzweil's book (co-written with some homeopath) Transcend (Amazon) is his most up-to-date effort. I've read it mostly and it seems well researched and also explains the science behind its recommendations. I also thought I'd mention that there are now certain compounds (Wikipedia) which show some evidence of initiating telomerase production in adult humans. If this drug works as well in humans as it has been shown to work in mice, it should significantly increase your healthspan.

(Warning: politics)

Posting a few links to relevant followups to the "Comment 171" situation and the related sexual harassment scandal and MIT's reaction which prompted that discussion. I'm posting these because the issue has come up in the last few weeks of open threads.

This piece seems like an excellent example of reading others as charitably as possible and essentially steelmanning every argument involved. It also gives a pretty good summary of the entire situation with relevant links.

Also, one of the women involved in the original sexual ha... (read more)

I am taking a graduate course called "Vision Systems". This course "presents an introduction to the physiology, psychophysics, and computational aspects of vision". The professor teaching the course recommended that those of us that have not taken at least an undergraduate course in perception get an introductory book on the subject. The one he recommends, which is also the one he uses for his undergraduate course, is this: http://www.amazon.com/Sensation-Perception-Looseleaf-Third-Edition/dp/0878938761 Unfortunately, this book goes for... (read more)


Scott Alexander, alias Yvain, conducted a companion survey for the readership of his blog, Slate Star Codex, to parallel and contrast with the survey of the LessWrong community. The issue I ponder below will likely come to light when the results from that survey are published. However, I'm too curious to wait, even if present speculation is later rendered moot.

Slate Star Codex is among my favorite websites, let alone blogs. I spend more time reading it than I do on LessWrong, and it may only be second to Wikipedia or Facebook for webs... (read more)

A number of SSC posts have gone viral on Reddit or elsewhere. I'm sure he's picked up a fair number of readers from the greater internet. Also, for what it's worth, I've turned two of my friends on to SSC who were never much interested in LW. But I'll second it being among my favourite websites.
Similarly, I've had several non LW friends who have started reading SSC after semi-frequently being linked there by my FB.
SSC seems to have a pretty wide fanbase on Tumblr. I'm sure he's picked up a very large non-LW fanbase over the years; he's been blogging forever.

More on Slate Star Codex than on LessWrong, there is discussion of memes as a useful concept for explaining or thinking about cultural evolution. The term 'memetics' is thrown around to correspond to the theory of memes as a field of inquiry. I want to know more about memetics before deciding whether it's worth my time to think about more deeply. More broadly, even if memetics isn't outright pseudoscience, it frequently skirts that border. I expect the discourse on memes might be at least a bit less speculative if us amateur memeticists here knew more about i... (read more)

You could post this as a top level discussion post here, if you want to make it more available and reduce trivial inconveniences to those without access to facebook.

I've been thinking about (and writing out my thoughts on) the real meaning of entropy in physics and how it relates to physical models. It should be obvious that entropy(physical system) isn't well-defined; only entropy(physical model, physical system) is defined. Here, 'physical model' might refer to something like the kinetic theory of gases, and 'physical system' would refer to, say, some volume of gas or a cup of tea. It's interesting to think about entropy from this perspective because it becomes related to the subjectivist interpretation of probability. I want to know if anyone knows of any links to similar ideas and thoughts.
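One way to make the entropy(model, system) point concrete: the same microstate data yields different entropies under different coarse-grainings. This is only a toy sketch (Shannon entropy of an empirical distribution), not a full statistical-mechanics treatment, and the positions and models are invented for illustration:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Entropy, in bits, of the empirical distribution over labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# One uniform sample over positions 0..7, standing in for a "system"...
microstates = [0, 1, 2, 3, 4, 5, 6, 7]

# ...described under two different "models" (coarse-grainings):
halves = [x // 4 for x in microstates]    # model A: left half vs. right half
quarters = [x // 2 for x in microstates]  # model B: four quarters

print(shannon_entropy(halves))    # 1.0 bit
print(shannon_entropy(quarters))  # 2.0 bits, same system, different model
```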

There are approximations in figuring entropy and thermal statistics that may be wrong in very nearly immeasurable ways. The one that used to stick in my head was the calculation of the probability of all the gas in a volume showing up briefly in one half of the volume. Without doing the math, I figured it is actually much less than the classic calculated result, because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this assumption will break.

But entropy is still real in the sense that it is "out there." An entire civilization is powered (and cooled) by thermodynamic engines, engines which quite predictably provide useful functionality in ways predictable in detail from calculations of entropy. A glass of hot water burns your skin even if you know the water's and the skin's precise characterization in parameter space before they come in contact. Fast-moving (relative to the skin) molecules of water break the bonds of some bits of skin they come in contact with. On the micro scale it may look like a scene from The Matrix, with a lot of slow-moving machine-gun bullets. The details of the destruction may be quite beautiful and "feel" cold, but, essentially thanks to the central limit theorem, a whole lot of what happens will be predictable in a quite useful, and quite unavoidable, way without having to appeal to the detail.

I think that in the only sense in which you can extract energy from finite-temperature water with a specially built machine custom-designed for the water's current point in parameter space, it is the machine which is at zero, or at least low, temperature. And so the fact that useful energy can be extracted from the interaction of finite-temperature water and a cold machine is totally consistent with entropy being real: thermal differences can power machines. And they do; witness the cars, trucks, airplanes, and electric grid that are essential for our economy. The good
I think you're getting several things wrong here. The assumption of zero correlation is valid for ideal gases. It will not break if there is a density difference. We're talking about statistical correlation here. "Entropy is in the mind" doesn't mean that you need consciousness for entropy to exist. All you need is a model of the world. Part of Jaynes' argument is that even though probabilities are subjective, entropy emerges as an objective value for a system (provided the model is given), since any rational Bayesian intelligence will arrive at the same value, given the same physical model and same information about the system.
Statistical independence means the chance that a molecule is at a particular spot depends not at all on where the other molecules are. Certainly if the molecules never hit each other, and only bounce off the walls of the volume, then this would be true: the molecules don't interact with each other, so their probability of being one place or another is not changed by putting the other molecules anywhere, as long as they don't interact. But molecules in a gas do interact; they bounce off each other. Even an ideal gas. There is an average distance they travel before bouncing off another molecule, called the mean free path. A situation where the mean free path is << the size of the volume is typical at STP. Does this interaction break non-correlation? My intuition is that it does. But the thing I know for sure is that the only derivation I have ever seen for calculating the probability that all the gas is in 1/2 the volume was done with the assumption of zero correlations, which we only know is the case for zero interaction, which is NOT an assumption required in ideal gas models. And it is certainly not true of any real gases. This is as true for entropy as it is for energy. By this standard, entropy and energy are both in the mind; neither one is "realer" than the other.
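For reference, the classic zero-correlation calculation under dispute works out as follows (exact arithmetic via fractions; whether the independence assumption actually holds is exactly the point being argued):

```python
from fractions import Fraction

# Under the zero-correlation assumption, each of N molecules sits in the
# left half independently with probability 1/2, so P(all left) = (1/2)**N.
def p_all_in_half(n_molecules):
    return Fraction(1, 2) ** n_molecules

print(p_all_in_half(10))           # 1/1024
print(float(p_all_in_half(100)))   # ~7.9e-31, and real gases have N ~ 10**23
```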
Entropy is in the mind in exactly the same sense that probability is in the mind. See the relevant Sequence post if you don't know what that means. The usual ideal gas model is that collisions are perfectly elastic, so even if you do factor in collisions they don't actually change anything. Interactions such as van der Waals have been factored in. The ideal gas approximation should be quite close to the actual value for gases like Helium.
Without a link! So I went to the sequences page in the wiki and the word entropy doesn't even appear on the page! Good job referring me there without a link. Okay... Is that the same sense in which Energy is in the mind? Considering that this seems to be my claim that you are responding to, AND there is no reasonable way to get to a sequence page that corresponds to your not-quite-on-topic-but-not-quite-orthogonal response, that would be awfully nice to know. Are you agreeing with me and amplifying, or disagreeing with me and explaining?
Probability is in the Mind.
Thank you. The thing that leaps out at me is that the rhetorical equation in that article between the sexiness of a woman being in the mind and the probability of two male children being in the mind is bogus. I look at a woman and think she is sexy. If I assume the sexiness is in the woman, and that an alien creature would think she is sexy, or my wife would think she is sexy, because they would see the sexiness in her, then the article claims I have been guilty of the mind projection fallacy because the woman's sexiness is in my mind, not in the woman. The article then proceeds to enumerate a few situations in which I am given incomplete information about reality, and each different scenario corresponds to a different estimate that a person has two boy children. BUT... it seems to me, and I would love to know if Eliezer himself would agree, even an alien given the same partial information would, if it were rational and intelligent, reach the same conclusions about the probabilities involved! So... probability, even Bayesian probability based on uncertainty, is no more or less in my head than is 1+1=2. 1+1=2 whether I am an Alien mind or a Human mind, unlike "that woman is sexy," which may only be true in heterosexual male, homosexual female, and bisexual human minds, but not Alien minds. But be that as it may, your comment still ignores the entire discussion, which is: is Entropy more or less "real" than Energy? The fact is that Aliens who had steam engines, internal combustion engines, gas turbines, and air conditioners would almost certainly have thermodynamics, and understand entropy, and agree with Humans on the laws of thermodynamics and the trajectories of entropy in the various machines. If Bayesian probability is in the mind, and Entropy is in the mind, then they are like 1+1=2 being in the mind, things which would be in the mind of anything which we considered rational or intelligent. They would NOT be like "sexiness."
Probability depends on state of knowledge, which is a fact about your mind. Another agent with the same state of knowledge will assign the same probabilities. Another agent fully aware of your state of knowledge will be able to say what probabilities you should be assigning. Sexiness depends on sexual preferences, which are a fact about your mind. Another agent with the same sexual preferences will assess sexiness the same way. Another agent fully aware of your sexual preferences will be able to say how sexy you will find someone. I don't see that there's a big difference here. Except maybe for the fact that "states of knowledge", unlike "sexual preferences", can (in principle) be ranked: it's just plain better for your state of knowledge to be more accurate.
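The two-children example from the linked post is easy to check by brute enumeration, and doing so illustrates the point both commenters agree on: any agent (human or alien) conditioning on the same information computes the same number. A small sketch:

```python
from itertools import product

# Enumerate all equally likely families of two children, as (older, younger).
families = list(product("BG", repeat=2))  # [('B','B'), ('B','G'), ('G','B'), ('G','G')]

def p_two_boys(condition):
    """P(both boys | condition), by counting the families the condition allows."""
    possible = [f for f in families if condition(f)]
    return sum(1 for f in possible if f == ("B", "B")) / len(possible)

# Different states of knowledge give different (but agent-independent) answers:
p_at_least_one_boy = p_two_boys(lambda f: "B" in f)     # 1/3
p_older_is_boy = p_two_boys(lambda f: f[0] == "B")      # 1/2
```

The probability changes when the conditioning information changes, but for a fixed state of knowledge the answer is fixed for every rational agent, which is the sense in which it is "in the mind" without being arbitrary.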
Well yes. Of course everything you can say about probability and sexiness you can say about Energy, Entropy, and Apple. That is, the estimate of the energy or entropy relationships in a particular machine or experimental scenario depends on the equations for energy and entropy and on the measurements you make on the system to find the values of the quantities that go into those equations. Any mind with the same information will reach the same conclusions about the Energy and Entropy that you would, assuming you are all doing it "right." Any intelligence desiring to transform heat-producing processes into mechanical or electrical energy will even discover the same relationships to calculate energy and entropy as any other intelligence, and will build similar machines, machines that would not be too hard for technologists from the other civilization to understand. Even determining if something is an apple. Any set of intelligences that know the definitions of apples common among humans on earth will be able to look at various earth objects and determine which of them are apples, which are not, and which are borderline. (I'm imagining there must be some "crabapples" that are marginally edible that people would argue over whether to call apples or not, as well as a hybrid between an apple and a pear that some would call an apple and some wouldn't). So "Apple," "Sexy," "Entropy," "Energy," and "Probability" are all EQUALLY in the mind of the intelligence dealing with them. If you check, you will see this discussion started by suggesting that Energy was "realer" than Entropy. That Entropy was more like Probability and Sexiness, and thus not as real, while Energy was somehow actually "out there" and therefore realer. My contention is that all these terms are equally as much in the mind as in reality, that as you say any intelligence who knows the definitions will come up with the same conclusions about any given real situation, and that there is no distinction in "realness" between them.
Anything at all is "in the mind" in the sense that different people might for whatever reason choose to define the words differently. Because this applies to everything, it's not terribly interesting and usually we don't bother to state it. "Apple" and "energy" are "in the mind" in this sense. But (in principle) someone could give you a definition of "energy" that makes no reference to your opinions or feelings or health or anything else about you, and be confident that you or anyone else could use that definition to evaluate the "energy" of a wide variety of systems and all converge on the same answer as your knowledge and skill grows. "Entropy" (in the "log of number of possibilities" sense) and "probability" are "in the mind" in another, stronger sense. A good, universally applicable definition of "probability" needs to take into account what the person whose probability it is already knows. Of course one can define "probability, given everything there is to know about mwengler's background information on such-and-such an occasion" and everyone will (in principle) agree about that, but it's an interesting figure primarily for mwengler on that occasion and not really for anyone else. (Unlike the situation for "energy".) And presumably it's true that for all (reasonable) agents, as their knowledge and skill grow, they will converge on the same probability-relative-to-that-knowledge for any given proposition -- but frequently that won't in any useful sense be "the probability that it's true", it'll be either 0 or 1 depending on whether the proposition turns out to be true or false. For propositions about the future (assuming that we fix when the probability is evaluated) it might end up being something neither 0 nor 1 for quantum-mechanical reasons, but that's a special case. Similarly, entropy in the "log of number of possibilities" sense is meaningful only for an agent with given knowledge. (There is probably a reasonably respectable way of saying "relative to
Aha! So it would seem the original sense that "Energy" is "realer" (more like Apple) than Entropy is because Entropy is associated with Probability, and Bayesian Probability, the local favorite, is more in the mind than other things because its accurate estimation requires information about the state of knowledge of the person estimating it. So it is proposed there is a spectrum "in the mind" (or dependent on other things in the mind as well as things in the real world) to "real" (or in the mind only to the extent that it depends on definitions all minds would tend to share). We have Sexiness is in the mind, and thinking it is in reality is a projection fallacy. At the other end of the spectrum, we have things like Energy and Apple which are barely in the mind, which depend in straightforward ways on straightforward observations of reality, and would be agreed upon by all minds that agreed on the definitions. And then we have probability. Frequentist definitions of probability are intended to be like Energy and Apple, relatively straightforward to calculate from easy to define observations. But then we have Bayesian probability, which is a statement which links our current knowledge of various details with our estimate of probability. So considering that different minds can have different bits of other knowledge in them than other minds, different minds can "correctly" estimate different probabilities for the same occurrences, just as different minds can estimate different amounts of sexiness for the same creatures, depending on the species and genders of the different minds. And then we have Entropy. And somebody defines Entropy as the "log of number of possibilities" and possibilities are like probabilities, and we prefer Bayesian "in the mind" probability to Frequentist "in reality" definitions of probability. And so some people think Entropy might be in the mind like Bayesian probability and sexiness, rather than in reality like Energy and Apple. Good summ
That is one definition. It is not the only viable way to define entropy. (As you clearly know.) The recent LW post on entropy that (unless I'm confused) gives the background for this discussion defines it differently, and gives the author's reasons for preferring that definition. (I am, I take it like you, not convinced that the author's reasons are cogent enough to justify the claim that the probabilistic definition of entropy is the only right one and that the thermodynamic definition is wrong. If I have given a different impression, then I have screwed up and I'm sorry.) "Log of #possibilities" doesn't have any probabilities in it, but only because it's a deliberate simplification, targeting the case where all the probabilities are roughly equal (which turns out not to be a bad approximation because there are theorems that say most states have roughly equal probability and you don't go far wrong by pretending those are the only ones and they're all equiprobable). The actual definition, of course, is the "- sum of p log p" one, which does have probabilities in it. So, the central question at issue -- I think -- is whether it is an error to apply the "- sum of p log p" definition of entropy when the probabilities you're working with are of the Bayesian rather than the frequentist sort; that is, when rather than naively counting states and treating them all as equiprobable you adjust according to whatever knowledge you have about the system. Well, of course you can always (in principle) do the calculation; the questions are (1) is the quantity you compute in this way of any physical relevance? and (2) is it appropriate to call it "entropy"? Now, for sure your state of knowledge of a system doesn't affect the behaviour of a heat engine constructed without the benefit of that knowledge. If you want to predict its behaviour, then (this is a handwavy way of speaking, but I like it) the background knowledge you need to apply when computing probabilities is what's "k
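As a small numerical check of how the two definitions relate: the "- sum of p log p" formula reduces exactly to "log of #possibilities" when all states are equiprobable, and drops below it when the distribution is peaked, i.e. when you know more about which state the system is in. A quick sketch (state counts chosen arbitrarily):

```python
import math

# Shannon/Gibbs entropy: H = -sum(p * log(p)), skipping zero-probability states.
def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

n = 16  # number of accessible microstates (illustrative)

# Equiprobable case: H reduces to log(n), the "log of #possibilities".
h_uniform = entropy([1 / n] * n)

# Peaked case: most probability on one state, i.e. an agent who knows more.
h_peaked = entropy([0.97] + [0.03 / (n - 1)] * (n - 1))
```

The uniform case gives exactly log(16), and the peaked case gives a smaller value, which is the quantitative content of "more knowledge means lower entropy" that the thread keeps circling.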
OK this is in fact interesting. In an important sense you have already won, or I have learned something, whichever description you find less objectionable. I still think that the real definition of entropy is as you originally said, the log of the number of allowable states, where allowable means "at the same total energy as the starting state has." To the extent entropy is then used to calculate the dynamics of a system, this unambiguous definition will apply when the system moves smoothly and slowly from one thermal equilibrium to another, as some macroscopic component of the system changes "slowly," slowly enough that all intermediate steps look like thermal equilibria, also known in the trade as "reversibly." But your "10 ms after the partition removed" statement highlights that the kinds of dynamics you are thinking of are not reversible, not the dynamics of systems in thermal equilibrium. Soon after the partition is removed, you have a region that used to be vacuum that has only fast moving molecules in it; the slow moving ones from the distribution haven't had time to get there yet! Soon after that, when the fast molecules are first reaching the far wall, you have some interesting mixing going on involving fast molecules bouncing off the wall and hitting slower molecules still heading towards the wall. And in a frame by frame sense, and so on and so on. Eventually (seconds? Less?) zillions (that's a technical term) of collisions have occurred and the distribution of molecular speeds in any small region of the large volume is a thermal distribution, at a lower temperature than the original distribution before the partition was removed (gases cool on expansion). But the details of how the system got to this new equilibrium are lost. The system has thermalized, come to a new thermal equilibrium. I would still maintain that formally, the log of the number of states is a fine definition, that the entropy thus defined is as unambiguous as "Energy," and that it
They don't change ANYTHING? Suppose I start with a gas of molecules all moving at the same speed but in different directions, and they have elastic collisions off the walls of the volume. If they do not collide with each other, they never "thermalize": their speeds stay the same forever as they bounce off the walls but not off each other. But if they do bounce off each other, the velocity distribution does become thermalized by their collisions, even when these collisions are elastic. So collisions don't change ANYTHING? They change the distribution of velocities to a thermal one, which seems to me to be something. So even if an ideal gas maintained perfect decorrelation between molecule positions in an ideal gas with collisions, which I do not think you can demonstrate (and appealing to an unlinked sequence does not count as a demonstration), you would still have to face the fact that an actual gas like Helium would be "quite close" to uncorrelated, which is another way of saying... correlated.
Both the "entropy is in the mind" and "entropy is real" explanations seem plausible to me (well, I am not a physicist, so anything may seem plausible), so now that I think about it... maybe the problem is that even if we were able to know a lot of stuff, we might still be limited in the ways we can use this knowledge. And knowledge you can't realistically use is as if you didn't even have it. So, in theory, there could be a microscopic demon able to travel between molecules of boiling water without hitting any of them -- so from the demon's point of view, there is nothing hot about that water -- the problem is that we cannot do this with real stuff; not even with nanomachines, probably. Calculating the path for the nanomachine would be computationally too expensive, and it is probably too big to fit between the molecules. So the fact is that a few molecules are going to hit that nanomachine, or any greater object, anyway. Or perhaps we could avoid the whole paradox by saying: "Actually no, you cannot have the knowledge about all molecules of the boiling water. How specifically would you get it, and how specifically would you keep it up to date?"
This is pretty much it, and it's a really subtle detail that causes a lot of confusion. This is why the real problem with Maxwell's demon isn't how you obtain the information, it's how you store the information, as Landauer showed. To extract useful work you have to erase bits ('forget' knowledge) at some point. And this raises the entropy.
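Landauer's bound is easy to state quantitatively: erasing one bit at temperature T dissipates at least k_B T ln 2 of energy into the environment. The sketch below just evaluates that expression with standard physical constants (nothing here is specific to this thread):

```python
import math

# Landauer's principle: erasing `bits` bits of information at temperature T
# costs at least bits * k_B * T * ln(2) joules of dissipated energy.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_min_energy(bits, temperature_kelvin):
    return bits * k_B * temperature_kelvin * math.log(2)

# Erasing a single bit at room temperature (300 K): roughly 2.9e-21 J.
e_one_bit = landauer_min_energy(1, 300)
```

Tiny per bit, but it is the reason a Maxwell's demon that must eventually forget its measurements cannot beat the second law: the erasure cost at least cancels the work extracted.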
I made a post about this a month or so ago. Yay!
That's pretty much exactly what I had in mind. Thanks.
If you haven't already read Jaynes' derivation of maxent, and the further derivation of much of statistical mechanics from those principles, that would be a good place to start.
In this way entropy is not much different from energy. The latter also depends on the model as much as on the physical system itself.
I'm going to disagree with you here. Not that energy doesn't depend on our models. It just depends on them in a very different way. The entropy of a physical system is the Shannon entropy of its distribution of 'microstates'. But there is no distribution of microstates 'out there'. It's a construction that purely exists in our models. Whereas energy does exist 'out there'. It's true that no absolute value can be given for energy and that it's relative, but in a way energy is far more 'real' than entropy.
Potential energy depends on what you set the zero level to, but I agree that this is very different than entropy. In particular, the difference in energy between two systems is well-defined.
"Out there" are fields, particles, interacting, moving, bumping into each other, turning into each other. Energy is a convenient description of some part of this process in many models. Just like with Jaynes' entropy, knowing more about the system changes its energy. For example, just like knowing about isotopes affects the calculated entropy of a mixed system, knowing about nuclear forces changes the calculated potential energy of the system.
I agree with passive_fist, and my argument hasn't changed since last time. If we learn that energy changes in some process, then we are wrong about the laws that the system is obeying. If we learn that entropy goes down, then we can still be right about the physical laws, as Jaynes shows. Another way: if we know the laws, then energy is a function of the individual microstate and nothing else, while entropy is a function of our probability distribution over the microstates and nothing else.
I agree that it feels different. It certainly does to me. Energy feels real, while entropy feels like an abstraction. A rock falling on one's head is a clear manifestation of its potential (turned kinetic) energy, while getting burned by a hot beverage does not feel like a manifestation of the entropy increase. It feels like the beverage's temperature is to blame. On the other hand, if we knew precisely the state of every water molecule in the cup, would we still get burned? The answer is not at all obvious to me. Passive_fist claims that the cup would appear to be at absolute zero then: I do not know enough stat mech to assess this claim, but it seems wrong to me, unless the claim is that we cannot know the state of the system unless it's already at absolute zero to begin with. I suppose a toy model with only a few particles present might shed some light on the issue. Or a link to where the issue is discussed.
An easy toy system is a collection of perfect billiard balls on a perfect pool table, that is, one without rolling friction and where all collisions conserve energy. For a few billiard balls it would be quite easy to extract all of their energy as work if you know their initial positions and velocities. There are plenty of ways to do it, and it's fun to think of them. This means they are at 0 temperature. If you don't know the microstate, but you do know the sum of the square of their velocities, which is a constant in all collisions, you can still tell some things about the process. For instance, you can predict the average number of collisions with one wall and the corresponding energy, related to the pressure. If you stick your hand on the table for five seconds, what is the chance you get hit by a ball moving faster than some value that will cause pain? All these things are probabilistic. In the limit of tiny billiard balls compared to pool table size, this is the ideal gas.
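The "predict the pressure from the velocity distribution" step above can be sketched as a quick Monte Carlo. Units and parameter values here are arbitrary (m = k = 1), chosen only to check that the kinetic-theory result, pressure = n m <v_x^2>, matches the ideal-gas n k T:

```python
import math
import random

# Kinetic theory toy check: for Maxwell-Boltzmann velocities at temperature T,
# the x-velocities are Gaussian with variance k*T/m, and the pressure on a
# wall is n * m * <v_x^2> = n * k * T.  (Units with m = k = 1.)
random.seed(0)
T = 2.0          # temperature (arbitrary units)
n_density = 1.0  # number density (arbitrary units)
N = 100_000      # Monte Carlo samples

vx_squared = [random.gauss(0.0, math.sqrt(T)) ** 2 for _ in range(N)]
pressure_mc = n_density * sum(vx_squared) / N  # estimate of n*m*<v_x^2>
pressure_ideal = n_density * T                 # ideal-gas law, n*k*T
```

The Monte Carlo estimate agrees with the ideal-gas value to within sampling error, which is the sense in which pressure is a well-defined probabilistic prediction even when the exact microstate is unknown.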
If you know precisely the state of every water molecule in the system, there's no need for your finger to get burned. Just touch your finger to the cup whenever a slow-moving molecule is approaching, and remove it whenever a fast-moving molecule is approaching (Maxwell's demon).
Right, supposing you can have a macroscopic Maxwell's demon. So the claim is not that it is necessarily at absolute zero, but that it does not have a well-defined temperature, because you can choose it to behave (with respect to your finger) as if it were at any temperature you like. Is this what you are saying?
Well, no. Temperature is the thermodynamic quantity that is shared by systems in equilibrium. "Cup of tea + information about all the molecules in the cup of tea" is in thermodynamic equilibrium with "Ice cube + kinetic energy (e.g. electricity)", in that you can arrange a system where the two are in contact but do not exchange any net energy. Note that it is NOT in thermodynamic equilibrium with anything hotter than an ice cube, as Eliezer described in spxtr's linked article: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/ Basically, if you, say, try to use the information about the water and a Demon to put the system in thermal equilibrium with some warm water and electricity, you'll either be prevented by conservation of energy or you'll wind up not using all the information at your disposal. And if you don't use the information it's as if you didn't have it. The salient point is that the system is not in thermal equilibrium with anything 'warmer' than "Ice cube + free energy." If you know everything about the cup of tea, it really is at absolute zero, in the realest sense you could imagine.
Hm. I have to think more about this.
Expanding on the billiard ball example: let's say one part of the wall of the pool table adds some noise to the trajectory of the balls that bounce off of that spot, but doesn't sap energy from them on average. After a while we won't know the exact positions of the balls at an arbitrary time given only their initial positions and momenta. That is, entropy has entered our system through that part of the wall. I know this language makes it sound like entropy is in the system, flowing about, but if we knew the exact shape of the wall at that spot then it wouldn't happen. Even with this entropy entering our system, the energy remains constant. This is why total energy is a wonderful macrovariable for this system. Systems where this works are usually easily solved as a microcanonical ensemble. If, instead, that wall spot was at a fixed temperature, we would use the canonical ensemble.
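The key point of this example, that the rough wall injects uncertainty while leaving total energy untouched, fits in a few lines. The speeds below are made-up illustrative values; the bounce randomizes each ball's direction but preserves its speed:

```python
import math
import random

# A "rough wall" bounce: direction is randomized (we lose track of the
# trajectory), but speed is preserved, so kinetic energy is exactly conserved.
random.seed(2)
speeds = [1.0, 2.0, 0.5]  # illustrative ball speeds, mass m = 1

energy_before = sum(0.5 * s * s for s in speeds)

# Each ball leaves the wall in a uniformly random direction at the same speed.
angles = [random.uniform(0, 2 * math.pi) for _ in speeds]
velocities = [(s * math.cos(a), s * math.sin(a)) for s, a in zip(speeds, angles)]
energy_after = sum(0.5 * (vx * vx + vy * vy) for vx, vy in velocities)
```

Energy is unchanged no matter which random angles are drawn, while our probability distribution over positions spreads out with every bounce; that is exactly why energy stays a good macrovariable while entropy grows.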
Again, this is very different from the situation with entropy. I think you're confusing two meanings of the word 'model'. It's one thing to have an incomplete description of the physics of the system (for instance, lacking nuclear forces, as you describe). It's another to lack knowledge about the internal microstates of the system, even if all relevant physics are known. (In the statistics view, these two meanings are analogous to the 'model' and the 'parameters', respectively). Entropy measures the uncertainty in the distribution of the parameters. It measures something about our information about the system. The most vivid demonstration of this is that entropy changes the more you know about the parameters (microstates) of the system. In the limit of perfect microstate knowledge, the system has zero entropy and is at absolute zero. But energy (relative to ground state) doesn't change no matter how much information you gain about a system's internal microstates.
I understand what you are saying, but I am not convinced that there is a big difference. How would you change this uncertainty without disturbing the system? How would you gain this information without disturbing the system (and hence changing its energy)? EDIT: see also my reply to spxtr.
You have to define what 'disturbing the system' means. This is just the classical Maxwell's demon question, and you can most definitely change this uncertainty without changing the thermodynamics of the system. Look at http://en.wikipedia.org/wiki/Maxwell%27s_demon#Criticism_and_development Especially, the paragraph about Landauer's work is relevant (and the cited Scientific American article is also interesting).
Isn't all this just punning on definitions? If the particle velocities in a gas are Maxwell-Boltzmann distributed for some parameter T, we can say that the gas has "Maxwell-Boltzmann temperature T". Then there is a separate Jaynes-style definition of "temperature" in terms of the knowledge someone has about the gas. If all you know is that the velocities follow a certain distribution, then the two definitions coincide. But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt icecubes and so on), so rather than saying that it has zero temperature, should we not just note that the information-based definition no longer captures the ordinary notion of temperature?

Not Quite the Prisoner's Dilemma

Evolving strategies through the Noisy Iterated Prisoner's Dilemma has revealed all sorts of valuable insights into game theory and decision theory. Does anyone know of any similar tournaments where the payouts weren't constant, so that any particular round might or might not qualify as a classic Prisoner's Dilemma?

Do you have a link for the original tournament?
There have been many Iterated Prisoner's Dilemma tournaments; at least a couple were done here on Less Wrong. Most such tourneys haven't included noise; to find out about the ones that did, try googling for some combination of the phrases "contrite tit for tat", "generous tit for tat", "tit for two tats", "pavlov", and "grim".
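For anyone who wants to experiment with these, here is a minimal noisy iterated PD sketch with "tit for tat" and "generous tit for tat" as examples. The payoff matrix is the standard T=5, R=3, P=1, S=0; the noise level, round count, and 1/3 forgiveness rate are just common illustrative choices, not from any particular tournament:

```python
import random

# Standard PD payoffs: (my payoff, their payoff) for each (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def generous_tft(my_hist, their_hist):
    # Like tit for tat, but forgives a defection 1/3 of the time,
    # which prevents noise from locking two reciprocators into feuds.
    if their_hist and their_hist[-1] == "D" and random.random() > 1 / 3:
        return "D"
    return "C"

def play(strat_a, strat_b, rounds=200, noise=0.05):
    """Iterated PD where each intended move is flipped with probability `noise`."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

random.seed(1)
scores = play(generous_tft, tit_for_tat)
```

Swapping in "contrite tit for tat", "pavlov", or "grim" is a matter of adding more strategy functions with the same `(my_hist, their_hist)` signature; running a round-robin over all pairs reproduces the basic structure of the noisy tournaments mentioned above.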
Has there been research on Prisoner's Dilemma where the players have limited amounts of memory for keeping track of previous interactions?
Google gives these: http://www.pnas.org/content/95/23/13755.full.pdf http://www.icmp.lviv.ua/journal/zbirnyk.79/33001/art33001.pdf http://www.complex-systems.com/pdf/19-4-4.pdf https://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=ASSET2007&paper_id=287 http://ms.mcmaster.ca/~rogern4/pdf/publications_2009/annie_ltm.pdf
That question's potentially ambiguous: does "previous interactions" mean previous moves within a single game, or previous games played? If the former, quite a bit of research on the PD played by finite state machines would fit. If the latter, Toby Ord's work on the "societal iterated prisoner's dilemma" would fit.

Any worthwhile posts to read that aren't found in the Sequences? (http://wiki.lesswrong.com/wiki/Sequences)

I recommend this one http://lesswrong.com/lw/iri/how_to_become_a_1000_year_old_vampire/ although I read it a long time ago - I may have a different opinion on it currently. Re-reading it now.

Can it be non-LW material? I found this to be an excellent no-background-needed introduction to AI. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Is there an eReader version of the Highly Advanced Epistemology 101 for Beginners sequence anywhere?

Public voting and public scoring

I am sure this has been debated here before but I keep dreaming of it anyway. Let's say everyone's upvotes and downvotes were public and you could independently score posts using this data with your own algorithm. If the algorithms to score posts were also public then you could use another users scoring algorithm instead of writing your own (think lesswrong power-user).

As a simple example, let's say my algorithm is to average the score of user_Rational and user_Insightful, and user_Rational's algorithm is just lesswrong regular...

Currently, the backlog of changes to the codebase here is so big, and so little work is being done on it, that even if there were a consensus for this change it would be unlikely to happen. More specific to this proposal, there are at least two problems with this idea: First, it could easily lead to further groupthink: Suppose a bunch of Greens zero out all voting by certain people who have identified as Blues and a bunch of Blues do the same. Then each group will see a false consensus for their view based on the votes. Second, making votes public by default could easily influence how people vote if they are intimidated by repercussions for downvoting high-status users or popular arguments, or even just not downvoting because it could make enemies.
Yeah, I suspect this would just move the game one step more meta. Instead of attacking enemies by mass downvoting now people would attack their enemies by public campaigns based on alleged patterns in the targets' votes. Then we could argue endlessly about what patterns are okay or not okay.
I agree there still would be very easy ways to punish enemies or, even more commonly, 'friends' that don't toe the line. I do think it would identify some interesting cliques or color teams. The way I envision using it would be more topic-category based. For instance, for topic X I average this group of people's opinions, but a different group's on topic Y. On the positive side, if you have a minority position on some topic that now would be downvoted heavily, you could still get good feedback from your own minority clique.

General question: I've read somewhere that there's a Bayesian approach to at least partially justifying simplicity arguments / Occam's Razor. Where can I find a good accessible explanation of this?

Specifically: Say you're presented with a body of evidence and you come up with two sets of explanations for that evidence. Explanation Set A consists of one or two elegant principles that explain the entire body of evidence nicely. Explanation Set B consists of hundreds of separate explanations, each one of which only explains a small part of the evidence. Assum...

The B approach to Occam's razor is just a way to think carefully about your possible preference for simplicity. If you prefer simpler explanations, you can bias your prior appropriately, and then the B machinery will handle how you should change your mind with more evidence (which might possibly favor more complex explanations, since Nature isn't obligated to follow your preferences).

I don't think it's a good idea to use B in settings other than statistical inference, or probability puzzles. Arguing with people is an exercise in xenoanthropology, not an exercise in B.

Upvoted for
I'm not sure exactly what you mean by this. Do you mean that Bayesianism is inappropriate for situations where the data points are arguments and explanations rather than quantifiable measurements or the like? Do you mean that it shouldn't be used to prefer one person's argument over another's? In any case, could you elaborate on this point? I haven't read through much of the Sequences yet (I'm waiting for the book version to come out), but my impression was that using Bayesian-type approaches outside of purely statistical situations is a large part of what they are about. Not sure I understand this. Assuming you're both trying to approach the truth, arguing with others is a chance to get additional evidence you might not have noticed before. That's both xenoanthropology and Bayesianism.
Yes. I disagree. Look at our good friend Scott Alexander dissecting arguments. How much actual B does he use? Usually just pointing out basic innumeracy is enough "oh you are off by a few orders of magnitude" (but that's not B, that's just being numerate, e.g. being able to add numbers, etc.) I think the kind of stuff folks in this community use to argue/update internally is all fine, but I don't think it's a formal B setup usually, just some hacks along the lines of "X has shown herself to be thoughtful and sensible in the past, and disagrees w/ me about Y, I should adjust my own beliefs." This will not work with outsiders, since they generally play a different game than you. I think the dominating term in arguments is understanding social context in which the other side is operating, and learning how they use words. If B comes up at all, it's just easy bookkeeping on top of that hard stuff. -------------------------------------------------------------------------------- I don't understand what people here mean by "B." For example, using Bayes theorem isn't "B" because everyone who believes the chain rule of probabilities uses Bayes theorem (so hopefully everyone).
Seems they're referring to Bayesian Epistemology / Bayesian Confirmation Theory, along with informal variants thereof. Bayesian Epistemology is a very well respected and popular movement in philosophy, although it is by no means universally accepted. In any case, the use of the term "Bayesian" in this sense is certainly not limited to LessWrong.
Do you mean your prior for A is about your prior for B, or your priors for each element are about the same? If you mean the first, then there is no reason to favor one over the other. Occam's razor just says the more complex explanation has a lower prior. If you mean the second, then there is a very good reason to favor A. If A requires n explanations and B requires m, with n < m, and all explanations are independent and of probability p, then P(A) = p^n and P(B) = p^m, so A is exponentially more likely than B. In real life, assuming independence tends to be a bad idea, so it won't be quite so extreme, but the simpler explanation is still favored.
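To make the arithmetic concrete, here is a minimal sketch. The probability and sub-claim counts are made up for illustration: with p = 0.5, an explanation needing 2 independent sub-claims beats one needing 5 by a factor of 2^3 = 8.

```python
# Hypothetical numbers: explanation A needs n independent sub-claims,
# explanation B needs m, each sub-claim holding with probability p.
p = 0.5
n, m = 2, 5

prob_A = p ** n          # P(A) = p^n = 0.25
prob_B = p ** m          # P(B) = p^m = 0.03125

# The simpler explanation wins by a factor exponential in (m - n):
ratio = prob_A / prob_B  # 2^(m - n) = 8.0
```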
I think you'll get somewhere by searching for the phrase "complexity penalty." The idea is that we have a prior probability for any explanation that depends on how many terms / free parameters are in the explanation. For your particular example, I think you need to argue that their prior probability should be different than it is.

I think it's easier to give a 'frequentist' explanation of why this makes sense, though, by looking at overfitting. The uncertainty in the parameter estimates roughly depends on the number of sample points per parameter. Thus the fewer parameters in a model, the better we expect each of those parameters to generalize. One way to think about this is that the more free parameters you have in a model, the more explanatory power you get "for free," and so we need to penalize the model to account for that. Consider the Akaike information criterion and the Bayesian information criterion.
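A minimal sketch of those two criteria. The log-likelihoods below are invented for illustration; the formulas are the standard ones, AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, where lower is better.

```python
import math

def aic(k, log_likelihood):
    # Akaike information criterion: 2k - 2*ln(L)
    return 2 * k - 2 * log_likelihood

def bic(k, n, log_likelihood):
    # Bayesian information criterion: k*ln(n) - 2*ln(L)
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits: the 10-parameter model fits slightly better
# (higher log-likelihood), but pays a larger complexity penalty.
simple_aic = aic(k=2, log_likelihood=-50.0)    # 104.0
complex_aic = aic(k=10, log_likelihood=-48.0)  # 116.0
# Lower is better, so the simpler model is preferred here despite the worse fit.
```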
This is a good question, but not when applied to the origin of the Torah example. There a more appropriate discussion is of the motivated cognition of the original Talmudic authors, who would have happily attributed 100% of the Torah to the same source, were it not for the 8 verses which do not fit. For a Christian these authors are already suspect because they denied the first coming of the Messiah, so one's priors of their trustworthiness should be low to begin with.

I have a slate of questions that I often ask people to try and better understand them. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though, I remain uncertain about this. The question is actually quite simple and so I offer it to the Less Wrong community to see what kind of answers people can come up with, as well as what the majority of Less Wrongers think. If you'd rather you can private message me your answer.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

I don't think this question is sufficiently well-defined to have a true answer. What does it mean to have/lack truth, what does it mean to have/lack happiness, and what are the extremes of both of these? If I have all the happiness and none of the truth, do I get run over by a car that I didn't believe in? If I have all the truth but no happiness, do I just wish I would get run over? Is there anything to stop me from using the truth to make myself happy again? Failing that is there anything that could motivate me to sit down for an hour with Eliezer and teach him the secrets of FAI before I kill myself? This option at least seems like it has more loopholes.
I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I'm curious how Less Wrongers would handle the ambiguity as well. In the context of the question, it can perhaps be better defined as: if you were in a situation where you had to choose between Truth (guaranteed additional information) or Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?

It's interesting that you interpreted the question to mean all or none of the Truth/Happiness, rather than what I assumed most people would interpret it as: a situation where you are given additional Truth/Happiness. The extremes are actually an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well just be illogical.
Defining happiness as "guaranteed increased utility" is questionable. It doesn't consider situations of blissful ignorance, where:

1. We can't seem to agree whether being blissfully ignorant about something one does not want is a loss of utility at all.

2. If that does count as a loss of utility, utility would not equate to happiness, because you can't be happy or sad about something you don't know about.
For simplicity's sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility. Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don't know about, but Eudaimonia is a bit of a strange concept.

Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer? Consider this rephrasing of the question: if you were in a situation where someone (possibly Omega... okay, let's assume Omega) claimed that you could choose between two options, Truth or Happiness, which option would you choose? Note that there is significant uncertainty involved in this question, and that this is a feature, rather than a bug, of the question. Given that you aren't sure what "Truth" or "Happiness" means in this situation, you may have to elaborate and consider all the possibilities for what Omega could mean (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a "correct" rational answer? If it's not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
Adam Zerner:
Great question! I'm glad you brought it up! Personally, it's a bit of an ugh field for me, and is something I'm confused about and really wish I had a good answer to.

To me, this gets at a more general question: "what should your terminal values be?" It is my understanding that rationality can help you to achieve terminal values, but not to select them. I've thought about it a lot and have tried to think of a reason why one terminal value is "better" or "more rational" than another... but I've pretty much failed. I keep arriving at the conclusion that "what should your terminal values be?" is a Wrong Question, which becomes pretty obvious once it's dissolved. But at the same time... it's such an important question that the slightest bit of uncertainty really bothers me. Think of it in terms of expected value - a huge magnitude multiplied by a small probability can still be huge. If I misunderstood something and I'm pursuing the wrong terminal goal(s)... well, that'd be bad (how bad depends on how different my current goals are from "the real goals").

I'd love to hear others' takes on this. It appears that people live their lives as if things other than Your Happiness matter, like Altruism and Truth. I.e., people pursue terminal values other than their own happiness. Is this true? I'd really be interested in seeing a LW survey on terminal goals.
Truth is a tool. If it can't be used to fulfill my goal of happiness, what good is it? That being said, if you just meant my happiness, then I'd take truth and use it to increase net happiness.
Hey it's a good question. I'd pick Happiness. When I was much younger I might have said Truth. I was a student of physics once and loved to repeat the quote that the end of man is knowledge. But since then I have been happy, and I have been unhappy, and the difference between the two is just too large.

What app does less wrong recommend for to-do lists? I just started using Workflowy (recommended from a LW friend), but was wondering if anyone had strong opinions in favor of something else.

P.S. If you sign up for workflowy here, you get double the space.

EDIT: The above link is my personal invite link, and I get told when someone signs up using it, and I get to see their email address. I am not going to do anything with them, but I feel obligated to give this disclaimer anyway.

It depends on why I'm making the list. If I'm making a to-do list for a project I'm working on, Workflowy is good because it's simple and supports hierarchical lists. For longer-lived stuff where I add and delete items, like grocery/shopping lists or books to read, I use Wunderlist because they have an Android app, a standalone Windows app, and it looks pretty. Browser-based apps annoy me, so I like the Windows app, and the Android app is nice to have when I'm actually in the grocery store. When I'm making a list because I need to be productive, and not as a way to plan, I use a paper to-do list: http://www.amazon.com/gp/product/B0006HWLW2/ref=oh_aui_detailpage_o08_s00?ie=UTF8&psc=1. Checking things off on paper does wonders for productivity, and having the printed thing helps set the mood.
I use a paper notebook, inspired by bullet journal and autofocus for daily/weekly goals when the list stays under 20 or so items. Recently a project started ballooning into more items than this system could handle, so I picked up todo.txt a month ago. I've been very happy with it so far. The system works with just a regular text editor and keeping all the lines in the file lexically sorted, but it's also a markup format that can be used with specific tools. I keep the project-specific list synced with a symbolic directory link from the project directory tree to Dropbox, and currently use the Simpletask app to update the list on my phone. Seems to work well for everything I need.
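The "plain text editor plus lexical sorting" workflow described above can be sketched like this. The task lines are made up for illustration; in the todo.txt convention, an "(A)"-"(Z)" priority starts a line and a leading "x" marks a completed task, so a plain lexical sort doubles as a priority sort with finished items at the bottom.

```python
# Hypothetical todo.txt-style lines: "(A)"/"(B)" are priorities,
# a leading "x" marks a completed task.
tasks = [
    "x 2015-01-05 buy printer ink +home",
    "(B) sync project notes to Dropbox +project",
    "(A) print out form for xyz @office",
]

tasks.sort()  # plain lexical sort: priorities first, completed tasks last
```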
I've tried a bunch, but Todoist is the only one that's powerful, flexible, quick, and easy enough for me to want to use.
I like Complice for having a daily to-do that allows you to track how much time you've spent on each of your items (if you're using its pomodoro timer), and to see which goals you did (and didn't) meet on past days. However, I know the founder through CfAR so I may be biased.
I've found success with OmniFocus.
I'm using workflowy as well, and it's the only to-do list software I've ever actually used for more than a few days. One feature that I've wanted for a while is dependencies. Let's say you need to print out a form, but you need to purchase printer ink first. Being able to hide "print out form for xyz" until "buy printer ink" is completed would be great.
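A sketch of how that dependency feature might work (the task names and data layout here are made up): a task stays hidden until every task it depends on is marked done.

```python
# Hypothetical task store: each task lists the tasks it depends on.
tasks = {
    "buy printer ink": {"done": False, "needs": []},
    "print out form for xyz": {"done": False, "needs": ["buy printer ink"]},
}

def visible(name):
    # Show a task only once all of its prerequisites are completed.
    return all(tasks[dep]["done"] for dep in tasks[name]["needs"])

hidden_before = visible("print out form for xyz")  # False: ink not bought yet
tasks["buy printer ink"]["done"] = True
shown_after = visible("print out form for xyz")    # True: prerequisite done
```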

Could use an editor or feedback of some kind for a planned series of articles on scarcity, optimization, and economics. Have first article written and know what the last article is supposed to say, and will be filling in the gaps for a while. Would like to start posting said articles when there is enough to keep up a steady schedule.

No knowledge of economics required, but would be helpful if you were pretty experienced with how the community likes information to be presented. Reply to this comment or send me a message, and let me know how I can send you the text of the article (only one at present).

On one hand, gorillas are crucially important for the seed dispersion that maintains forests, so we need to save them from Ebola, even if only for the human benefit that can be gained from those forests. On the other hand, Ebola is killing humans, too. There's disagreement on how to allocate research funding.

My feeling is that gorillas are pretty important just because they are apes (for practical research purposes, although I think they have a fair degree of intrinsic value too). Seed dispersion seems the least of these benefits. (On the other hand, I suppose the existence of other apes poses a disease threat to humans). We should really demand more funding for research, in general. Under-funding research may be the single most irrational thing we do as a society, considering the return on investment.