If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Most people do not consume enough potassium. The RDA for potassium is high, and potassium deficiency seems to cause bad things like strokes. You'd need to eat ~8 bananas a day to satisfy your RDA (which isn't that surprising - the dastardly banana lobby has tried to cast bananas as high-potassium, but e.g. tomatoes have more). And excess potassium probably isn't very dangerous. Also, someone on LW (Kevin?) reported a nootropic effect from supplementing potassium.
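For the arithmetic behind the "~8 bananas" figure: a quick back-of-envelope sketch, assuming the older 3,500 mg daily value for potassium and roughly 420 mg per medium banana (both figures are my assumptions for illustration, not from the comment):

```python
# Back-of-envelope check of the "~8 bananas" claim.
# Assumed figures (illustrative): 3500 mg daily value for potassium
# (the older FDA DV), ~420 mg potassium per medium banana.
daily_value_mg = 3500
potassium_per_banana_mg = 420

bananas_needed = daily_value_mg / potassium_per_banana_mg
print(round(bananas_needed, 1))  # -> 8.3
```

Note that with the newer ~4,700 mg adequate-intake figure the count comes out closer to 11, which is one way the "inconsistency in the literature" mentioned downthread shows up.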

Most people consume too much sodium. (There's been some uncertainty around whether excess sodium is actually bad, but it still seems clear that we consume more sodium than we need.)

Potassium and sodium can both be eaten in salts, which will taste pretty similar. Therefore, perhaps we could make health gains by replacing much of our table salt with potassium! Indeed, some people have to do this for health reasons, so the great machinery of capitalism has already done lots of work for us here. For instance, here you can buy 12x3oz of potassium salt (enough to last more than a year) in shakers for $15. I've been trying this out for a while and it tastes almost like normal salt.

I don't know much detail about nutrition, though, so this may be stupid for reasons I haven't thought of. Could someone who knows more about the relevant science please weigh in?

Most people do not consume enough potassium.

I find this claim a bit weird considering that only a very small minority of my patients (geriatric, often poor nutrition) are hypokalemic while not receiving any supplements.

Whose RDA is that, and how was it determined? How strong is the evidence for it?

The fact that you have patients means you might know more about this than me. That said, Wikipedia [http://en.wikipedia.org/wiki/Potassium#In_diet] seems pretty confident that we're not meeting the DRI, states that increased potassium intake improves health, and says only that the correct intake is "debated". It's important to distinguish between inadequate intake and suboptimal intake. I'd wager the medical establishment only refers to the former as hypokalemia. There's also a good argumentum ad savannam africanus that can be made here. Plants contain lots of potassium and little sodium. (If you put plant matter in a pot and burn it, you'll be left with a white layer of "pot ash", hence "potassium".) Nowadays, people don't eat many plants, and we pour sodium on everything.
Yes. It could also be that the potassium and sodium concentrations don't vary much no matter what your consumption, but your kidneys have to work harder to maintain the balance, which could have health effects. I don't see much hypernatremia either, although sodium is way overconsumed. I think this is because water follows sodium, and therefore as you retain sodium you retain water in the extracellular volume. Therefore the measurable concentrations don't change although you have excess sodium in your system. I think potassium and water don't interact the same way because potassium is mostly intracellular and cells can't stretch arbitrarily, whereas the extracellular volumes can. Optimal potassium intake probably varies hugely depending on your sodium intake, since they interact inseparably in the human body.
africanam, surely? (I'm assuming savanna is feminine, as 1st declension Latin nouns generally are. Actually it appears that in the 16th century there was a Latin word zauana with that meaning, but sticking with a more recognizable form is probably better.)
I stole that phrase from here [http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/8ayz], which has some discussion of the grammar. I don't know any Latin.
Looks to me like the conclusion of that discussion, in so far as it had one, was that "africanam" is right.
There is a serious inconsistency in the literature on potassium. The original RDA was set to be the average consumption about 30 years ago. But people trying to reach it today find it very difficult. This might be because diets have changed, but is probably because people reassessed how much potassium food contains and failed to update numbers like the RDA that are based on the erroneous measurement.
Average sodium intake in the US: 3.4 g/day. Optimal intake for health, according to the Cochrane review on sodium consumption: 3 g/day, so the average is not excessively bad. Negative health effects start cropping up above the 4 g and below the 2 g range.
Is there a straightforward way to know your average sodium intake?
Not really unless you actually chart your foods for a bit. Which can be laborious. Worth it though to get it through to system 1 how bad your current diet is (if it is about as bad as average).
I'm not sure how bad my system 1 happens to be on the salt question. On some days my system 1 feels like more salt is warranted, and I follow that intuition. I'm also not sure that the RDA is accurate to within +/- 25% for me, given that I sometimes drink 4 liters of water per day and probably need more salt than the average person on those days. Charting my food doesn't easily tell me how much of the salt I put into the water to cook pasta actually makes it into my pasta.
Anecdotally, some people seem a lot more able to taste the difference than others. So-called "lite salt" (50/50 sodium chloride and potassium chloride) tastes almost identical to commodity table salt to me, although it doesn't have the complexity of unrefined sea salt; but that isn't true for everyone. An option for those who don't want to replace table salt in their diets might be to supplement with potassium chloride in pill form. Capsules and capsule fillers are fairly cheap on Amazon. Oddly, this doesn't seem well covered by the existing supplement market; potassium supplements do exist but the doses they provide are ridiculously small (single-digit percentages of FDA allowances, if you're lucky).
They taste pretty different.
http://www.ncbi.nlm.nih.gov/pubmed/18710605 seems to be a published article in support of your claim. It's from a Chinese source, so the results can be interpreted with a grain of salt ;) http://ajcn.nutrition.org/content/22/4/464.abstract is a Western source that also comes to the conclusion that potassium salt is good. Both sources recommend substitution with a mix of both salts. Given that nutrition is always about being in balance, the idea of mixing seems good. That's especially true as, from what I can see, potassium consumption leads to increased sodium excretion, so if you supplement potassium you might need more sodium than the average person. However, some people with renal failure get problems through increased potassium consumption: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1124926/. Potassium also does seem to interfere with some blood pressure medications, so someone who's on medication for high blood pressure shouldn't do this without speaking with their doctor. I'm not 100% sure, but I faintly remember that someone on LW got into problems by taking potassium as a nootropic.
I remember this too.
You're not thinking of gwern and magnesium [http://www.gwern.net/Nootropics#experiment-1]?
Potassium didn't work out too well for me earlier either: http://www.gwern.net/Zeo#potassium [http://www.gwern.net/Zeo#potassium] Kind of weird given how people generally seem to regard both potassium and magnesium as good to supplement. Someone mentioned that it might be important to keep the ratio of potassium:magnesium constant (so separate testing is not good) but I dunno how plausible that is.
I almost ordered the potassium salt (potassium chloride) from amazon.com, but then I discovered Raab LowNat [http://www.amazon.de/gp/product/B004FPVU64/ref=s9_simh_gw_p325_d0_i1?pf_rd_m=A3JWKAKR8XB7XF&pf_rd_s=center-2&pf_rd_r=1ZZ3S66T37RAA5GSA4XP&pf_rd_t=101&pf_rd_p=455353807&pf_rd_i=301128] on the local Amazon (DE), which I think provides a better balance of potassium and sodium salt, plus other sea mineral salts too.
Everything [https://en.wikipedia.org/wiki/Hyperkalemia] in balance [https://en.wikipedia.org/wiki/Hyponatremia].
Sure, but by default we suffer from mild hypokalemia and mild hypernatremia, not the other way round. Obviously you shouldn't cut out all sodium, and unless you do something stupid you're not really at risk of that.
Healthy kidneys will simply remove the excess potassium unless the intake is ridiculously high. However the RDA could quickly get dangerous if you get kidney failure for any reason.
The fact that it's possible to have too much and it's possible to have too little does not tell you how much you need.

Has anyone posted about Seth Dickinson yet? I don't keep up on the open threads as much as I'd like, but my google-fu says no.

Last year I was blown away by a short story by Seth Dickinson called A Plant (Whose Name is Destroyed). Recently I went and checked out Seth Dickinson's other works. I've read over half of them now, and I gotta say - I STRONGLY recommend this author. Many of his works have a very strong transhumanist message, and some could be called rationalist. I'm kinda surprised I haven't already heard his name brought up on LessWrong, or SlateStarCodex, or /r/rational. I'm fixing that this week.

A few of my favorite stories:

Economies of Force - A post-GAI story where humanity made AI that almost captures our values, but not quite, and it results in the sort of utopia you might expect from that sort of failure. Shades of Amputation of Destiny and Bostrom's Empty Disneyland. If anyone can figure out the significance of the name "Loom", please let me know. It must have been chosen for a reason, but I'm not making the connection.

Sekhmet Hunts the Dying Gnosis: A Computation - A rather literal take on Meditations on Moloch, and/or An Alien God

Morrigan in the Sunglar...

I play Destiny, which is a popular mainstream game. At one point in the story a pair of researchers, investigating a captured mechanical enemy, discover to their horror that it is flawlessly simulating the entire lab, down to the conversation that they are having. They proceed to hash out the whole "torturing simulated versions of self as hostages" concept. It was pretty cool to see a concept I've always thought was pretty rationalist out in the wild, as it were.

Full story here: http://destinygrimoire.com/grimoire-cards.html

Scroll down till you find: Ghost Fragment: Vex

It occurs to me that a large part of my rejecting theism (Christianity) had nothing to do with the claims of religion itself, but rather was based on a study of human psychology and cognition. That is, while my study of the historical evidence for Christianity did help assign low probabilities to traditional biblical accounts like Noah's Ark, the Exodus from Egypt, Jesus' resurrection, etc., the nail in the coffin seems to be my observation of the tendency humans possess to believe in some sort of religion, regardless of the particular details.

I've noticed this in the area of conspiracy theorists as well. The biggest reason I tend not to accept traditional conspiracy theories (9/11 was planned by insiders, multiple shooters in Dealey Plaza, etc.) is unrelated to the details of the particular theory. My biggest reason for rejecting conspiracies is my observation that humans are prone to believing them.

I'm wondering how Bayesian probability treats this sort of 'evidence'—evidence that is unrelated to the objective details of the question at hand.

Anyone wanna explain?

I think it's related to this: http://en.wikipedia.org/wiki/Reference_class_problem Once you can categorize a theory into a certain reference class, you use your priors for that reference class rather than your priors for similarly improbable claims of a more generic nature.
Huh, this is the only reasonable reply in the whole thread and it is largely ignored.
First of all, in the absence of any real evidence, conspiracy theories and religion can mostly be rejected simply on the basis of priors. However, as a general rule, many people believing something, especially if it's a vast majority, is decent evidence for a claim, and so the fact that many people believe something should cause you to update your subjective probability upward, to some extent. The important point here is that the strength of that evidence has a very significant dependence on how people attained those beliefs. If you were to find that those beliefs were attained via some generally reliable method, e.g. scientific experimentation and dissemination via broad scientific consensus, this would cause you to further update your subjective probability upward, because it would be a strong sign that their beliefs correlate with reality. On the other hand, if you were to find that their beliefs were obtained by, for example, using a ouija board, you would update your subjective probability back down because the evidential value of the other person's beliefs would fall to pretty much zero. Basically, finding out the process by which people obtained their beliefs partly or wholly screens off [http://lesswrong.com/lw/lx/argument_screens_off_authority/] the evidential weight of the fact that they have those beliefs.
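The screening-off point can be made concrete in odds-form Bayes. In this sketch (the numbers are illustrative assumptions, not anything from the thread), the evidential weight of "many people believe X" depends entirely on the likelihood ratio, which collapses once we learn the beliefs were formed by a truth-insensitive process:

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# Illustrative numbers only -- the point is the structure, not the values.

def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Update P(claim) on a piece of evidence via Bayes' rule."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_given_true / p_evidence_given_false
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.01  # low prior for an extraordinary claim

# "Many people believe it", before we know how they came to believe it:
# assume belief is somewhat more likely if the claim is true.
p1 = posterior(prior, 0.5, 0.1)   # -> ~0.048, a real update upward

# Now we learn the beliefs came from a ouija board: people would believe
# about equally often whether or not the claim is true, so the
# likelihood ratio is ~1 and the popularity evidence is screened off.
p2 = posterior(prior, 0.5, 0.5)   # -> 0.01, back to the prior
```

The belief-formation process doesn't subtract evidence so much as reveal that the likelihood ratio was never far from 1 in the first place.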
If many people got the same belief from Ouija boards independently, I'd update my belief in Ouija boards. Conversely, if millions of people believe the result of a single poorly designed study, that does not make their number very relevant. I think what should undermine belief in religions or conspiracy theories is that these people all read the same books and watch the same YouTube videos.
I envisaged them all in the same room using the same Ouija board at once, but that image isn't really the most obvious interpretation of what I said. Nevertheless, you're not right about the Ouija board case. First of all, it also depends on how many different beliefs could result from the Ouija boards, and how many people ended up with different beliefs. For example, if 40% of the world's population independently used an Ouija board to conclude the truth of a certain religion, and another 40% of the world's population independently used an Ouija board to conclude the falsity of that religion, this would not be significant evidence despite the large numbers involved. On the other hand, if 80% of people tried it and 79% of people ended up with the same belief, then you definitely need to take a significant look at Ouija boards and how people are using them and try to figure out what's going on there. However, I wouldn't really take it as evidence in favour of Ouija boards, because while systematically inducing certain beliefs is a strong sign of precision, it is not any kind of sign of accuracy. Unless you have other reasons to support accuracy of Ouija boards, such as a well-understood causal mechanism, or being consistently correct about various known facts (that are unknown to the test subjects) in double-blind experiments, you cannot use them as evidential support for either belief. Your point about the importance of independence between people is an important one, but I think you still have to pay significant attention to the specific nature of the method by which the belief was attained. If, as in your example, millions of people believe due to exact same study, your evidence now consists of a single scientific study which managed to become very widely known and believed, as compared to many studies which do not. Both of these factors are correlated with truth, but the a priori weight of millions of people has been screened off; relative to that prior wei
In a Bayesian framework, you are simply assigning a higher prior probability to x believes that A because of cognitive bias B than x believes that A because A is true and through causal mechanism C, is determining x's belief in A, or some similar set of hypothesis. As long as these priors approximately reflect base rates (in addition to object level arguments you have examined for yourself), it seems like a decent way to go about things to me.
People are more likely to believe true things, so someone believing something is evidence that it's true. If you find out that they're especially likely to believe this even if it's not true, but not proportionately more likely to believe it if it is, then the fact that they believe it is not as strong evidence. Thus, if it's a given that they believe it, finding out that they'd believe it either way is evidence against it.
I'd throw in a modifier that people are most likely to believe true things about areas where they have direct experience and get feedback. It's something like near and far, and the near has to be very near. Give extra points if the experience is recent. The less a theory meets those constraints, the less you should think belief is evidence that it's true.
How do you know this?
It's an implicit assumption that you have to make before you can get anywhere, like modus ponens. From there, you can refine your beliefs more.
Modus ponens can be demonstrated to be a valid assumption by drawing up a truth table. How do you demonstrate that "people are more likely to believe true things"?
Using truth tables seems more complicated than modus ponens. I would expect it would be better to use modus ponens to justify truth tables rather than the other way around. Regardless, you need to start with something. You can't justify modus ponens to a rock. If you don't think people are more likely to believe true things, then how do you justify any of that stuff you just said being true?
People tend not to believe things because they're true, but for some other reason. Pr(People Believe | True) < Pr(People Believe | Some other explanation)? I would hazard to guess that the number of untrue things people have believed all throughout human history overshadows the number of things they (we) have believed that were actually true. It's a bit of an ad hominem, but logical fallacies can be viewed as weak Bayesian evidence [http://lesswrong.com/lw/aq2/fallacies_as_weak_bayesian_evidence/].
If you reject theories based on humans believing in them you can't believe in any theories.
Hm. I feel like that's oversimplifying it. I don't reject theism, for instance, because people believe it. Rather, I've noticed the lynchpin of my disbelief seems to have as much to do with what I've learned about why people tend to believe in things like religion as it does an evaluation of the actual claims of any given religion.
What do you consider the reason people believe in conspiracy theories?
All but the most ignorant believe in conspiracy theories. For example, I believe that Julius Caesar was assassinated by a conspiracy in the Roman Senate, that a conspiracy in the Thai Army led to the coup in May of this year, that most of the narcotics consumed in the United States are sold by illegal, semi-secret networks of supply and distribution, and so on. But none of these are called "conspiracy theories." Rather, a "conspiracy theory" is a conspiracy theory that the speaker wishes to ridicule. For example, 9-11 "Truthers" are mocked for believing in a conspiracy theory, but the correct version of the story is also a conspiracy theory - but a conspiracy by an al-Qaeda cell, not the US government.
I don't think this is accurate. What characterizes the things generally called "conspiracy theories" is not only that the people talking about them want to ridicule them. They also tend to have the following features not widely shared by more credible theories with conspiracies in:

* They have very little evidence directly supporting them. (Advocates tend to focus on alleged evidence against rival "mainstream" theories.)
* They involve large conspiracies, with many people with varied interests, successfully keeping a big secret unexposed, even though it would take only a small slip-up or leak for it to get out.
* They require those many people to be consistently villainous in ways there's little reason (outside the conspiracy theory) to think they are.

So, for instance, "9-11 truthers" can't (e.g.) point to leaked government memos saying "let's fly planes into buildings and say it was terrorists"; rather, AIUI they argue that (1) the "usual" explanations are no good because being hit by a plane can't actually cause a building to collapse as the WTC towers did, and (2) that means the government is covering something up, so they probably planned it all. This theory requires that a whole lot of people in the US government knowingly betrayed their country and killed thousands of innocent people, and did it without getting caught, and no one involved blew the whistle.
Obviously, any false theory isn't going to be adequately supported by the facts. But I dispute that they necessarily have any pattern of features, and I suggest any apparent pattern is more read in by people trying to denigrate. For example, why would a 9-11 conspiracy require a massive number of government operatives to know? Obviously, it could have been carried out by a small terrorist cell taking over commercial airliners - a conspiracy only requires a single government agent telling them to do it. Now, in the absence of specific information, Ockham's Razor should tell us that the government agent is superfluous, but I suppose it depends on your priors. Governments have been known to do things like this [http://en.wikipedia.org/wiki/Lavon_Affair]. The US government does have secretive programmes [http://en.wikipedia.org/w/index.php?title=PRISM_(surveillance_program)]. Suppose a large bomb went off tomorrow in central Moscow, and the government blamed "Galician fascist terrorists." Due to my priors, I would give a high probability to it being an inside job; so if your priors for the USG are sufficiently faulty as to equate it with Russia, you might be so foolish as to become a 9-11 truther. I also think you are unfair, because when "conspiracy theories" get good evidence, they stop being called such. For example, it was a conspiracy theory to claim that the British Communist Party was secretly in the pay of Moscow, right up to the minute the Kremlin archives were opened. Then it just became historical fact. So there's a selection bias at work. Conspiracy theories become pathological when absence of evidence is taken as evidence of a cover-up. But legitimate belief in conspiracy theories normally comes down to priors. It is equally pathological to say you don't believe in conspiracy theories, but then claim to be unsurprised by e.g. the Snowden revelations. If the NSA secretly undermining public cryptography without anyone finding out was part of your model all al
Maybe I'm misusing the terminology somehow, but I wouldn't regard a theory that says the September 11 attack was carried out by a generic terrorist group asked to do it by a single rogue government official acting alone as a "conspiracy theory", and I don't think that's close to what "9-11 truthers" mostly think. (Also, I'm not sure how it would work. Most terrorist organizations don't take instructions from random rogue government officials.) Isn't the usual "truther" story that the US government -- meaning something like "the President, some of his senior staff, and enough people further down to make it actually happen" -- were responsible, with the goal of justifying an invasion of Iraq or stirring up support for the administration on the back of fear and anger, or something like that? (Maybe I'm misunderstanding what you mean by "a single government agent"?) Yes, you might. Were you expecting me to disagree? My claim isn't that (what are commonly called) conspiracy theories are all so insane that no one could embrace them unless seriously mentally disordered. It's that they have enough features in common, other than being disapproved of by the person mentioning them, that "conspiracy theory" isn't a mere term of abuse. (For my part, though my opinion of the Russian government is extremely negative, I would not at all expect it to start massacring random Russian citizens in order to manufacture outrage against "Galician fascist terrorists", not least because they'd be likely to get caught and I'd expect them not to want that.) I agree that there's (so to speak) an evaluative element in the term "conspiracy theory". But I don't think it's what you say it is (i.e., that the only difference is whether the person using the term wants to ridicule the theory in question). It's more like the evaluative element in the term "murder". You don't call a killing a murder if you think it was justified, but that doesn't mean that "murder" just means "killing the speaker di
Thank you for an interesting reply. I think the way you use "conspiracy theory" is quite a good one*, but somewhat non-standard. In particular, you state that ideas described as "conspiracy theories" are sometimes correct. I think Brillyant (to whom I was originally replying) gives a much more standard description when he calls them "ludicrous" and "absurd." For example, wiki states that the phrase: If the ordinary connotation of "conspiracy theory" was "low-probability hypothesis involving a conspiracy" I would not have objected to its use. *although I think that the "evil" part needs work.
It looks to me as if Brillyant is using the term to mean something close to "ludicrous, absurd theory involving a conspiracy". I remark firstly that this isn't so far from "low-probability hypothesis involving a conspiracy" and secondly that it's entirely possible that Brillyant hasn't sat down and thought through exactly what shades of meaning s/he attaches to the term "conspiracy theory", and that on further reflection s/he would define that term in a way that clearly doesn't amount to "theory I want to make fun of". I appreciate that you're concerned about equivocation where someone effectively argues "this is a conspiracy theory (= theory with a conspiracy in), therefore it should be treated like a conspiracy theory (= ludicrous absurd theory with a conspiracy in)", but I don't see anyone doing that in this thread, and given how firmly established the term is I don't think there's much mileage in trying to prevent it by declaring that "conspiracy theory" simply means "theory with a conspiracy in". (In particular I don't think Brillyant is engaging in any such equivocation. Rather, I think s/he is, or would be after more explicit reflection, saying something like this:

1. People like to believe in conspiracies.
2. Therefore, the fact that a theory is believed by quite a lot of people is less evidence when the theory features a conspiracy than it normally would be.
3. So when someone offers up a theory that isn't terribly plausible on its face and that involves a conspiracy, my initial estimate is that it's unlikely to be true; the best explanation of the fact that I'm being invited to consider it is that its advocates have fallen prey to their inbuilt liking for theories involving conspiracies.

This doesn't oblige Brillyant to disbelieve every theory with a conspiracy in, because some actually have good evidence or are highly plausible for other reasons. Those tend not to be the ones labelled "conspiracy theory".)
I think you misunderstand my concern; perhaps I have not been clear enough. I am not so much worried about equivocation as I am worried by precisely the 3-step process which you describe. And I am particularly worried about people going through that process, labelling something a "conspiracy theory," then the theory turns out to be true, and they never reassess their premises. Let's restate your process in more neutral language.

* For reasons of specialisation, partial information, etc, I treat the fact that lots of people believe in a theory as partial evidence in its favour.
* Some people have a higher prior than me for the existence of conspiracies.
* Therefore if a theory involving a conspiracy is believed by quite a lot of people, it may be that this belief is due to their higher prior for conspiracies, not any special knowledge or expertise that I need to defer to.
* Therefore I treat their belief in the theory as less evidence than normal, on the basis that if I had the evidence/expertise/etc that they do, I would be less likely than them to conclude that there is a conspiracy.
* So if someone offers an implausible-seeming theory to me involving a conspiracy, I discount it and conclude that its advocates just have a high prior for conspiracies.

Suddenly, this doesn't look like a sound epistemological process at all. Steps (1) and (2) are fine, but (3), (4) and (5) go increasingly off the rails. It looks like you are deliberately shielding your anti-conspiracies prior, by discounting (even beyond their initial level of plausibility) theories that might challenge it. And if, on those occasions that a conspiracy is eventually proven, you refuse to update your prior on the likelihood of conspiracies (by insisting that such-and-such a theory doesn't really count as a conspiracy theory, even though, at the time, you were happy to label it as such), then I would say that the process has become truly pathological, just as much as that of
(I'm not sure that #2 is the right formulation. A lot of people don't think in terms sufficiently close to Bayesian inference that talking about their "priors" really makes sense. I'm not sure this is more than nit-picking, though.) I agree that #3, 4, 5 "go increasingly off the rails", but I think what goes off the rails is your description, as much as the actual mental process it aims to describe. Specifically, I think you are making the following claims and blaming them on the term "conspiracy theory":

* That when someone thinks something is a "conspiracy theory" they discount it not only in the sense of thinking it less likely than they otherwise would have, but in the stronger sense of dismissing it completely.
* That they are then immune to further evidence that might (if they were rational) lead them to accept the theory after all.
* That if the theory eventually turns out to have been right, they don't update their estimate for how much to discount theories on account of being suspiciously conspiracy-based.

Now, I dare say many people do do just those things. After all, many people do all kinds of highly irrational things. But unless I'm badly misreading you, you are claiming specifically that I and Brillyant do them, and you are laying much of the blame for this on the usage of the term "conspiracy theory", and I think both parts of this are wrong. Yup. But the answer to that question is always yes, and therefore tells us nothing. (Mightn't a creationist's higher prior on the universe being only 6000 years old be part of their tacit knowledge and expertise? It might be, but I wouldn't bet on it.) I don't think the symmetry is quite there. People brought up in totalitarian countries who then move to liberal democracies see too many conspiracies. No doubt people brought up in liberal democracies who then move to totalitarian countries see too few, but it could still be that people brought up in totalitarian countries who stay there and peo
I remember September tenth, and if you'd said that to me then, I'm not sure I would have called it a conspiracy theory (I might have), but I certainly would have thought you were wildly overconcerned.
For sure. But you'd probably have said the same if I'd said that al-Qaeda terrorists were about to take over lots of planes and fly them into buildings, with thousands of lives lost. And yet that does in fact appear to have happened, and no one calls it a "conspiracy theory". So the fact that saying the day before that terrorists asked to do it by a single rogue government official were about to take over planes and fly them into buildings would have sounded wildly overconcerned and conspiracy-theory-ish can't make believing now that that's what happened a conspiracy theory. (For the avoidance of doubt: I do not in fact think that the people who flew planes into buildings on "9/11" were asked to do so by any official of any government, rogue or otherwise.)
So, in what way is what you are saying relevant to the debate we are having? The way you use the term "conspiracy theory" obviously isn't the way Brillyant uses it.
I am suggesting that the way he is using "conspiracy theory" amounts to a fnord. It doesn't appear to have much more content than "conspiracy theory I don't believe in" and as such is as much about him as it is about the theory. As such, I suggest that discussion is unlikely to be productive unless better terms are used.
I pointed out a couple particular theories that are (a) ludicrous and (b) have a significant number of people who believe them. I'd agree "conspiracies" happen all the time. But believing there were multiple gunmen in Dealey Plaza or that Bush ordered 9/11 is a special case of absurd belief.
So, basically you don't understand the argument he's making and therefore try to talk about something else?
Not entirely sure. There seems to be a pattern recognition malfunction that takes place. "Conspiracies" happen all the time (i.e. people lie, governments have covert programs, office politics, etc.) and people seem to want to avoid being naive about the "real" reasons and cause for significant events like 9/11 or the Kennedy assassination. It might just help to resolve cognitive dissonance between (a) powerful forces (like the gov't) are generally in solid control, pulling a lot of strings and not being entirely up front about it and (b) major stuff happens that the gov't couldn't/didn't prevent.
Also in both the examples above the official explanation is politically inconvenient for a lot of people. For example, people like to think of JFK as a left wing martyr, thus him being killed by a communist [https://en.wikipedia.org/wiki/Lee_Harvey_Oswald] is rather inconvenient to this narrative.
In 2013 there was a poll [http://www.publicpolicypolling.com/pdf/2011/PPP_Release_National_ConspiracyTheories_040213.pdf] about conspiracy theories. They provide contingency tables that show relationships between beliefs in various conspiracy theories and voting in 2012 US presidential election, political ideology, gender, party affiliation, race and age. Looking at these tables it seems that belief in JFK conspiracy is somewhat similar across the political spectrum, and, maybe surprisingly, it was very liberal people (and not very conservative ones) who were most likely to agree with the official explanation. Obviously, grouping all JFK conspiracy theories into one option loses a lot of information, as liberals and conservatives would probably differ in which ones they find the most appealing. Moreover, neither liberals, nor conservatives are homogeneous groups, and this poll does not show differences among the subgroups (e.g. geographical or some other kind) that might exist.
I wonder how many of them know Oswald was a communist.

I don't know if the moderator would consider this spamming, but my gang has a cryonics and life extension conference in the works at Don Laughlin's Riverside Resort in Laughlin, Nevada, on November 7-9. I invite LessWrongers to attend. You can get to Laughlin easily enough from anywhere on the West Coast. If you have to fly in, you need to get a flight to McCarran airport in Las Vegas, and rent a car or use an airport shuttle which runs between McCarran and Laughlin. Mr. Laughlin has worked with us to make the convention very affordable, and definitely more affordable than Alcor's comparable events in Scottsdale.

Mr. Laughlin built an airport across the Colorado River in Bullhead City, but only chartered commercial flights operate out of it, probably for high rollers, entertainers and their crews and such.

You can contact me for more information: mark.plus@rocketmail.com

I don't think it's spamming. Promotion of life extension research and knowledge exchange is in all our interest. Utilitarian analysis suggests that your post is a net win. However, if you do the work to organise a conference, telling people to request information via email is not a good medium. Get a proper website to promote your event.

Is there any research into the optimal ratio of simulation (such as a soccer scrimmage) to drills/single skill practice (such as dribbling practice) for best performance?

Cult-like adherence to the words of a relatively uneducated man, frequent babbling in pseudo-scientific nonsensical terms, 'reinventing the wheel' and trying to claim their knowledge is completely unique and undiscovered (despite the complete opposite). A seeming opposition to mainstream science, and belief in an incredibly dangerous practice (cryonics) that they believe will lead them to a perfect afterlife (replace heaven with singularity and god with AI), and a leader that makes money from sitting around all day taking donations from his 'followers'. Oh, and both Hubbard and Yudkowsky post long rambling author tracts that pretend to be a story....

An idea that I'm kicking about: intuitive organisational schemas for knowledge.

I've previously observed that it's easier to process and retain facts about places if you know where those places are on a map. Knowing the map gives you an intuitive schema to hang the information on. I am never going to have to navigate to Kiribati or the Pitcairn Islands, but I suspect some part of my brain which handles spatial orientation and navigation is putting in some overtime and helping me remember things about them.

I'm considering making an anki deck of people. Historical scientists, mathematicians and philosophers, and contemporary academic and industrial figures. It could include photos/portraits, their field, their active period, maybe their nationality, and specific theories, ideas or publications they're known for. Much like knowing where Kiribati is on the map gives me a mental "space" to put facts about Kiribati, perhaps having a personal sense of who Tooby and Cosmides and Trivers and Dunbar are will give me a similar "space" to put facts about ev-psych.

In principle, this shouldn't be too hard. Without any assistance, most people remember details on hundreds of real-wo... (read more)

Sounds like dual-coding hypothesis [http://en.wikipedia.org/wiki/Dual-coding_theory]. TL;DR: more imaginable things are easier to remember because you're working two distinct memory subsystems (verbal, visual).
As far as I understand, the world championships in mnemonics all get won via memory palaces that use spatial memory. At my present project of making phonemes learnable, I specifically show the charts of phonemes along with my minimal pairs to give the user an orientation. Venn diagrams are a nifty tool for giving knowledge a structure. I use them in Anki to distinguish essential, semiessential and nonessential amino acids. I however haven't yet programmed automatic Venn diagram creation. The supermemo principles by Wozniak suggest against learning multiple similar items at the same time. If you have a deck about British monarchs, there's nothing wrong with spreading it out at 1 card per day. As far as automatic remembering of details about people goes, it often works via narratives. Narratives are difficult to display via Anki. I personally had a hard time following along the who's who of Game of Thrones; on the other hand, someone on LW has used the example of remembering facts about Game of Thrones as a way to illustrate how easy it is to remember facts about certain subjects.
I think this [http://lesswrong.com/lw/i9p/improving_enjoyment_and_retention_reading/9js2] is what you're talking about re: using Game of Thrones to remember facts. I do remember faces and people (and Game of Thrones characters) quite well, so I have fairly high hopes for this process, but it occurs to me that a lot of people on LW report face-blindness to varying degrees. This might not work so well for these people.
Yes, that's the thread I meant. I have no problem with recognizing people I meet in person again. I think the main issue is that I don't consume much fiction.
I've posted this before, but I think it's relevant here: See: How to Make a Complete Map of Every Thought you Think [http://www.speakeasy.org/~lion/nb/book.pdf] [pdf] An excerpt from the introduction (tldr: beware of eating yourself):
Weirdly enough, since that comment a month ago, I've been sucked into learning about the fission of Yugoslavia [https://en.wikipedia.org/wiki/Breakup_of_Yugoslavia], after years of knowing only the barest outlines of it — and I have noticed that knowing facts about the geography has helped me remember facts about the conflicts. I think it goes both ways, as well; knowledge of events helps cement knowledge of geography. Potentially useful [http://www.sciencemag.org/site/feature/misc/webfeat/gonzoscientist/episode14/index.xhtml].
As a kinetic learner I'm all about activities and doing things. I think reading over one card a day every day and reading over multiple ones would really help me learn something like that. There is also something to be said for life-long education including workshops, master classes, and such. That is much easier to do online now than ever before.

Can anyone recommend a good place to learn introductory human anatomy? I'm looking for a highish level overview of what organs make up different systems, how the different systems relate to each other? I know that I can hit wikipedia, but am looking for a packaged deal that will do a good enough job.

EDIT: Thanks for all the suggestions. I decided to pick up used, older editions of The Human Body Book for a good overview and Textbook of Medical Physiology for anywhere I want to delve into more detail.

If you like visuals, Crash Course has an entertaining and informative Biology playlist [https://www.youtube.com/playlist?list=PL3EED4C1D684D3ADF] which includes a subset on human biology. I would imagine any undergrad (or possibly even high school) introductory text would also revise major organ systems to the level you describe. Do you have any specific purpose or goal in mind?
I have two primary motivations. First, I'd like to have a better context for understanding new medical treatments that I hear about. Second, I'd like to know enough about basic human biology that my first response when someone tells me about a medical problem they're having is sympathy rather than a desire to learn what that part of the body does (my curiosity is usually followed by frustration when the person who is complaining of some organ failing doesn't even know what the organ is supposed to do when healthy and frustration is a particularly bad response). So, I guess my interest is more about understanding how anatomy relates to health. The link you provided looks good for learning basic biology, but I got most of that stuff in my basic biology class in college. I will check out the ones that look oriented towards human biology though. I was hoping for something shorter than a text book, although a good text book recommendation wouldn't go amiss either.
A quick Google threw this [http://teachmeanatomy.info/the-basics/book-reviews/top-5-textbooks/] up, though I can't attest to the merits of any of them. I'm tempted by the last one myself. (I am currently very slowly working my way through an undergrad human biology syllabus comprising four fairly weighty institution-specific textbooks. It's quite possible I'll never finish them.)
Gray's anatomy [http://www.amazon.com/s/ref=nb_sb_ss_c_0_14?url=search-alias%3Dstripbooks&field-keywords=gray's%20anatomy&sprefix=gray's+anatomy%2Caps%2C3699]. If you want to learn how the organ systems work, that's physiology. I liked Boron [http://www.amazon.com/s/ref=nb_sb_ss_c_0_13?url=search-alias%3Dstripbooks&field-keywords=boron%20physiology&sprefix=boron+physiol%2Caps%2C333] in med school, if you want a basic understanding just read the subtitles and figures. The basic stuff doesn't change drastically with time, and you can get older versions of the books cheaply.
Not exactly an overview but apparently good for learning location and name: Speed Anatomy App https://play.google.com/store/apps/details?id=com.speedAnatomy.speedAnatomyLite&hl=en [https://play.google.com/store/apps/details?id=com.speedAnatomy.speedAnatomyLite&hl=en]
Thanks. I'm actually already using that to learn the location and names. It's pretty useless for function, but when someone tells me what the duodenum is for I'll know where it is.
The part of biology that studies function is called physiology. Anatomy is the part about the structure.
DK Publishing has lots of books on this subject, all with the classic DK gorgeous visuals. They're geared to a public (not student / academic) audience, which may or may not be what you're looking for. The one I have is The Human Body Book [http://www.amazon.com/The-Human-Body-Book-DVD/dp/0756628652]. It also has sections on pathologies for each part of the body.

So ... I suspect someone might be doing that mass-downvote thing again. (To me, at least.)

Where do I go to inform a moderator so they can check?

You should send a message [http://lesswrong.com/r/discussion/lw/kza/meta_new_lw_moderator_viliam_bur/] to Viliam Bur [http://lesswrong.com/message/compose/?to=Viliam_Bur].
Thank you!
I'm amused that your comment is sitting at -1; seems like someone made an awkward move. Reverting to zero under policy of not punishing these requests, though. :P

Is it worthwhile to teach about "Logical Fallacies?"

When in high school, one of my English classes had a unit on logical fallacies. Everyone was given a list of "logical fallacies" like "appeal to authority" and "slippery slope." We had to do things like match examples with the names of the fallacies (which would almost always have multiple reasonable answers), and come up with examples of various fallacies.

At the time, I thought that this was a huge waste of time. My reasoning was that there were many more ways t... (read more)

I both disagree and agree with your high-school self. Learning to recognize common failure modes, and developing a common language for talking about them with each other, is a relatively cost-effective way to improve the average validity of my arguments, in much the same way that reducing infant mortality is a relatively cost-effective way to increase average lifespan. It doesn't do anything to improve the validity of my most valid arguments, though. Depending on how reliably I reach that maximum and how high that maximum is, that might be OK. Or it might not, and an entirely different approach (like teaching what valid arguments are) might work better. And the relationship between learning-to-X and teaching-X-in-high-school is of course a whole different problem. All of that said, I'm curious: how would you go about teaching what a valid argument is, to a degree that "don't trust anything else" is actually good advice to follow?
Scott Garrabrant:
I do not know. To be honest, my high school self had a strong tendency to overestimate the rationality and learning potential of the general population.
That's a very fair answer. Do you have any sense of how you learned it? For my own part, I feel like I learned what a valid argument is, to the extent that I have learned it, almost entirely by a series of negative examples. For that matter, I'm not sure I can articulate what a valid argument is in non-question-begging terms... though to be fair, I haven't sat down and tried for five minutes.
Scott Garrabrant:
I do not know how I learned how to argue, but I do not think it has anything to do with negative examples. For me, it seems similar to understanding what is a valid mathematical proof (one which in theory could be expanded to following the logical rules at each step) but where you are allowed to make observations and probabilistic reasoning, all of which came naturally to me. I do not feel like I ever had any inclination to use logical fallacies, and I feel like I am quick to recognize when arguments do not make sense. This is in contrast with cognitive biases. I feel like I am very dependent on parts of our brain that have biases; I will not be able to get past them easily, but I can learn to mitigate them by being aware of them.
On the other hand, some fallacies recur much, much more often. Ad hominem and appeal to authority are de facto THE way humans argue with each other, with strawmanning as a very strong third. So at least learning how to spot and correct those can alone improve someone's rationality. There are, though, much funnier ways to learn about fallacies, Biased Pandemic [http://lesswrong.com/lw/ar2/biased_pandemic/] being one of my favourites.
Scott Garrabrant:
Biased Pandemic is about learning about cognitive biases. Cognitive biases are different from logical fallacies.
Yes, in the sense that cognitive biases are a subset of logical fallacies systematically applied by our brain. So maybe I can refine my answer: is it worthwhile to teach about logical fallacies? Yes, especially when they become cognitive biases. A fun way to do this is Biased Pandemic.
Many cognitive biases don't have much to do with logic as humans generally don't make decisions via logic.
It's worth learning about logical fallacies and internalizing them. It might not be worthwhile to teach people about them in school because people often don't remember & internalize what they're taught there. While it's important to be able to recognize & build a valid argument, it's still been useful for me to use knowledge of fallacies to set mental triggers which activate when I mentally reach for a fallacy. Instead of unreflectively using an appeal to authority (for example) as a cognitive short-cut without checking whether it actually works, the not-quite-conscious sensation of making the appeal gets flagged for conscious attention, and I realize, "Oh! I'm making an appeal to authority. Does that appeal actually have much evidential weight?" Edit: this isn't directly responsive to you, but I can also imagine LWers who've moved on to noticing newer [http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/] fallacies [http://lesswrong.com/r/discussion/lw/kwk/overly_convenient_clusters_or_beware_sour_grapes/] finding it harder to understand why it's worth studying more well-known & canonical fallacies, even if the latter are as important.
There are "valley of bad rationality" effects here. Most so-called fallacies are in fact valuable heuristics. For instance, appeal to authority and slippery slope are both generally correct. Knowing about fallacies creates in people the illusion that they can tell good arguments from bad. It lets them refute, in a superficially intelligent way, whatever position they don't want to believe. Though I do think it's worth learning them anyway, for some people, at least so you know what people mean when they say "ad hominem".
I don't think that there is evidence that it's very useful. Knowing the label "appeal to authority" doesn't tell you anything about when you want to believe an authority and when you don't. The whole idea of teaching logical fallacies without providing the student any evidence that it's useful to teach logical fallacies is also very ironic. In the real world, scholarship is important for understanding a topic on a higher level.
Any professional educators able to comment on this? I had the opposite experience: I never learnt about logical fallacies in school, and when I found the formal definitions later in life I was shocked they were never taught.

Does muflax still have a blog or write somewhere? I remember reading some interesting things, but I can't find them now. My memory may be faulty.

It used to be at blog.muflax.com, but he appears to have taken it down. It wasn't on the Internet Archive last time I checked.
Gwern still links to some of muflax's writings, using his own backups. Googling something like "site:gwern.net muflax" turns up some results (though not many).

Is there a listing of Yvain/slatestarcodex's fiction? I just finished reading The Study of Anglophysics, and I want more.

I'm not aware of any complete compilations of his fiction in one place; that said, you can presumably find most of it by going through the "fiction" tag on SSC [http://slatestarcodex.com/tag/fiction/], the fiction section of his website [http://raikoth.net/fiction.html] and the fiction tag on his old LiveJournal [http://squid314.livejournal.com/tag/fiction].
this [http://gen.lib.rus.ec/scimag/index.php?s=10.1007%2Fs11569-012-0155-1] is slow but simple, while this [https://www.academia.edu/3353047/Visions_and_Ethics_in_Current_Discourse_on_Human_Enhancement] requires a login [https://bugmenot.com/view/academia.edu]
Google only gives two: https://www.axelarnbak.nl/wp-content/uploads/2014/01/Liebert-2010-Collingridge%E2%80%99s-dilemma-and-technoscience.pdf [https://www.axelarnbak.nl/wp-content/uploads/2014/01/Liebert-2010-Collingridge%E2%80%99s-dilemma-and-technoscience.pdf] http://www.philosophie.tu-darmstadt.de/media/institut_fuer_philosophie/diesunddas/nordmann/forensics.pdf [http://www.philosophie.tu-darmstadt.de/media/institut_fuer_philosophie/diesunddas/nordmann/forensics.pdf]

Rank the Greg Egan books from best to worst. I have read Permutation City, Quarantine, and Diaspora, loved them all, and am trying to decide which to read next.

No time to explain, but logged in to list:

1. Distress
2. Axiomatic (story collection)
3. Luminous (story collection)
4. Permutation City
5. Quarantine
6. Diaspora
7. Zendegi
8. Teranesia
9. Orthogonal trilogy
10. Schild's Ladder
11. Incandescence
I haven't read many. I fully recommend Diaspora and Luminous (short story collection), I was less sold on Permutation City.
This should be fun!

1. Distress - it's like the kitchen sink of hard/near-future SF
2. Quarantine - very enjoyable, but a bit simple-minded
3. Incandescence - seems like a return to early Egan's minimalism
4. Permutation City - cool, but rather off for me
5. Schild's Ladder - doesn't feel innovative, the ending has the same vibe as that of Permutation
6. Zendegi - was expecting more LW mockery after the discussions, unfortunately it was very limited
7. Diaspora - although brilliant in some respects, very confusingly written
8. Teranesia - just boring, I understand why it's not so known

Haven't read yet: An Unusual Angle; the Orthogonal trilogy I'll read when I get it whole.
Haven't read them all. Of the ones I've read:

1. Axiomatic (best, collection of short stories)
2. Permutation City
3. Distress (contains 'sufficiently advanced' biotechnology that amazingly did not piss me off)
4. Diaspora

<Edge of what I'd wholeheartedly recommend>

5. Schild's Ladder
6. Incandescence
7. Quarantine (I found it nearly unreadable)

I suspect he's one of those authors that everyone agrees one of his books is great but nobody can agree on which one...

EDIT: I find myself seriously considering the possibility that Diaspora is in the future of Permutation City...
1. Diaspora
2. Distress
3. Quarantine

(I haven't read the short story collections close enough together to rank them, but I'd generally recommend them.)

4. Permutation City
5. Zendegi
6. Schild's Ladder
7. Teranesia

Didn't finish (more physics than I could appreciate):

* The Clockwork Rocket
* Incandescence
If you liked Diaspora, you'll like Schild's Ladder. Zendegi is more psychological and a different kind of fiction, thus not a favourite of mine. Incandescence and Orthogonal should be grouped together; not bad, but different, and IMHO Orthogonal suffers from being a trilogy. They are also much more maths-heavy.

If you want a logical reason for the complete disinterest in longevity research shown by the powers that be, the most obvious, if you’re even a little paranoid, is that they already have the secret, and aren’t interested in distributing it to the hoi polloi. If so, members of the inner circle would obviously have to fake their own deaths every so often – otherwise they might face mobs of angry peasants bearing torches.

From Greg Cochran

He is joking or crazy, right?

Biological immortality is cancer-cure-complete. And cancer is very tough, it's a breakdown of multicellularity coordination. Conspiracy theory bug in brain seeing agency everywhere is much more likely.

Why do plants not get cancer btw?


They do, but to paraphrase Spock, "it's cancer, but not as we know it".



It's virally or bacterially or fungally induced much more of the time than in animals, and metastasis is basically a no-go in an organism that has zero internal cellular mobility due to cell walls, but it does happen.

I would also not be surprised if the fact that a lot of plant cells that are not at the growing tips of shoots are massively polyploid (we're talking 128n in a lot of mature leaf cells) and thus difficult to divide successfully makes it harder for issues to originate in mature plant tissue. Also, in most plants there are pretty much only the equivalent of 'stem cells' at said growing tips, while we tend to have them all over. The growing tips can get screwed up too, and when that happens you get fasciation (http://en.wikipedia.org/wiki/Fasciation) [EDIT: or witches' brooms (http://en.wikipedia.org/wiki/Witch%27s_broom)]

Amusingly, plant cancer can be quite valuable. The unusual grain patterns in large burls make them sought-after for specialty woodwork, and they're hard to grow deliberately, which has led to problems with poaching from protected forests.

Thanks, that is interesting!
http://33.media.tumblr.com/0e0b568b0d99ae96426e1c4ee3c54fc8/tumblr_nc97h81Sa51qk10pvo1_500.png [http://33.media.tumblr.com/0e0b568b0d99ae96426e1c4ee3c54fc8/tumblr_nc97h81Sa51qk10pvo1_500.png]
As far as I remember there are also a few animals (such as the naked mole rat) which do not get cancer or at least get it very very rarely.
But they do [http://scholar.google.com/scholar?q=%22plant+cancer%22&btnG=&hl=en&as_sdt=0%2C5].
It would be more plausible that they don't have true immortality, but they do have extended healthy lifespans and some better cancer treatments than we do.
He's joking. Look at his previous post on longevity [http://westhunt.wordpress.com/2013/12/09/aging/]. Cochran's model of the "powers that be" seems to be that they are kind of dumb.
Dunno, but he's not very consistent in whatever it is. He says at the end of that paragraph: If the real powers that be are invisible, i.e. so powerful you'll never hear of them, they don't need to pretend to die. Presumably, the visible powers that be that he begins by talking about are taking their orders from the invisible PTB, who discovered the secret of immortality in their own secret laboratories. But this is Weekly World News territory.
Sure they would. If some random nobody was living forever without aging, they would still get noticed, as Joseph Curwen [https://en.wikipedia.org/wiki/The_Case_of_Charles_Dexter_Ward] discovered. It's funny to me that you can't recognize the sardonic quality of the post (e.g. the reference to becoming a "sequoiah farmer"). Being rational does not mean you must exhibit Spock-like literalism!
The PTB are not random nobodies, and Curwen is fictional. I can recognise many things in the post. I can imagine he's not serious, and recognise non-seriousness in the post. I can imagine he's lost it and means every word, and recognise that in the post. I can imagine it's an idiot test to judge his commenters by their responses, and recognise that in the post. I can read all of these things into the post as easily as each other, which means I don't know which, if any, is the true meaning. But there is that internal inconsistency about who and what he thinks the PTB are.
Seems like you missed the point. I mentioned the example of Joseph Curwen as an illustration, not as evidence. The basic point is that it does not matter how famous/obscure someone is: if they just stay young forever, people will notice. And the idea is that they prefer to remain unnoticed. So the obvious solution is to kill off the old identities every so often and start over in some other place under some new identity. So that makes it clear that there is no inconsistency in Cochran's scenario. As long as they are immortal, they will have to keep switching identities or their anonymity is compromised. Maybe it's just because I'm familiar with his general attitude, but I think it is very clear he is joking. I pointed out in my other comment that he's talked about this before, and he makes it clear that he thinks that longevity research is underfunded because elites are ignorant of the possibilities. He's comically assuming the opposite: that elites ignore longevity research for the only rational reason: because they are already immortal. If it has to be explained it isn't as funny, I suppose.
Yes, time to clean up your RSS feed ;)
Reminds me of Lazarus Long [http://en.wikipedia.org/wiki/Lazarus_Long] and the Howard Families.
See, if people trying to bring in a new age where people can be brought in line with the legitimate views could just stop posting nonsense like this, maybe we could get somewhere. Great, now one of the few legitimate blogs we had looks crazy, and we would be delegitimized if we linked to it.
I put higher weight on joking than crazy. He might be trying to build up plausible deniability for past/future claims that he really believes in but could get him fired.
I don't think that strategy is likely to succeed. Just look at the criticism that Eliezer got after his April first post. I think the most charitable reading is that he wants to run an experiment on his audience to see the effect of writing utter crap.
I sure hope that we get a post in a few days saying 'It seems I can't Social Text you guys.'
What happened? What is this post?
The post is about how he was born in a parallel world and then woke up in this world. It's full of statements intended to be plausibly deniable and therefore complicated to read. Eliezer withdrew the post from LW.
This could explain the recent popularity of the vampire romance novels, and the new model of the vampires. In the past, the most reliable way to longevity was calorie restriction, mostly practiced by hermits. This was the old model of the vampire: an old person, fragile to weather changes (holy water, even daylight), wise and evil, but nonetheless easily defeated by a determined human attack. These days, I am not sure what exactly the longevity treatment is, but it seems to allow the person to remain young and attractive and strong and quick etc. Maybe not exactly as attractive and as strong as the Twilight novels describe, though; part of that impression is probably just a halo effect around a very-high-status person. Thus, the new kind of vampires. Ones who are obviously superior to average humans, and the only danger for them is their vampire competitors. Unless that is also a metaphor for the world oligarchy.
What do you mean, you're not sure? It's blood [http://www.nature.com/nm/journal/v20/n6/full/nm.3569.html]! X-D
I get the impression that vampires = superheroes for adolescent girls. I also find it interesting that from what I've heard (I haven't read the things), Bella Swan in the Twilight novels undergoes a kind of "reverse Arwen" transformation, shedding her humanity and mortality so that she can stay with her vampire husband forever.

I don't know how many people here have medical or nutritional expertise, but for those who do, I have a question.

The benefits and risks of multivitamins have been discussed a little in the media, but as a layperson I find it difficult to look at the conflicting studies online and come to any particular conclusion as to what I should do.

Specifically, I am looking at this as a person with a chronic illness who finds it difficult to feed myself a diet as healthy as I would like due to money and time/energy constraints. I am therefore looking at supplementing ... (read more)

The average person should get a blood panel if they want to know what's going on with their body, and that goes double for a person with a chronic illness. With a bit of effort you can probably figure out a combination of cheap foods and supplementation that you feel good about. WRT multivitamins specifically, the effect on your stress level dominates the negative impact on your health in all likelihood.
I'm not sure it's possible to just get a blood panel on the NHS. My instinct is that I'd need to actually show symptoms of a vitamin deficiency. Thanks anyway though.
A basic blood panel is more or less the first thing doctors try when faced with uncertain symptoms. Go to your MD and complain about general weakness, random pains, strange headaches, and occasional fainting spells :-)
They shouldn't be too expensive to have done privately. We were able to get a basic panel in the US for under $200.
Yeah I had a quick look and that's about right for the price over here - certainly not doable for me, anyway.
A basic blood panel is unlikely to tell you much about your micronutritional status if your diet is even remotely normal and you don't have problems with absorption, i.e., gastrointestinal disease. The only vitamins they might sample would be vitamins D, B5 and perhaps B12 and folate. I don't think these are even part of any basic health check, at least for younger people. However, if there are gross deviations in a basic blood panel, then nutrition is probably the least of your problems. Just thoughts of an MD from Finland; I'm not much of an expert on nutrition. I don't eat extremely healthily myself, and based on this literature review [http://summaries.cochrane.org/CD007176/LIVER_antioxidant-supplements-for-prevention-of-mortality-in-healthy-participants-and-patients-with-various-diseases] I don't bother supplementing anything other than vitamin D, of which I consume 50 µg a day. I wouldn't be scared of multivitamins either; while relative risks might look scary, the absolute risks involved are minuscule.
Thanks for the link and advice; I was basically looking for a review like that but lacking the studies-savvy to find it.
As far as I understand, the situation is as follows: If you are a vegetarian, then you have to supplement B12. There is inconclusive evidence on whether supplementing Vitamin D3 is helpful. Humans don't produce Vitamin D2 naturally, so taking it instead of D3 is a bad idea. I personally think that supplementing helps for people who are mostly indoors and don't get their needs met by synthesizing it while being outdoors. At the moment there is a randomized controlled trial underway that will give us more information in a few years. There is no evidence that the average person benefits on an average day from taking multivitamins. Any decision to take them should be made on an individual basis. If you don't have a well-trained sense of listening to your body, that means going to the doctor and getting a blood test. If you do have a chronic illness, you should discuss questions like this with the doctor responsible for treating you for that chronic illness. He might know specific things that apply to you based on your illness. What you can do for your health is get 30 minutes of exercise, three times per week, that gets your pulse up. There are also various things which can be experimented with to see whether they reduce the symptoms of your chronic illness.

Can anyone suggest a sane introductory book on game theory?

LW regular James Miller has written a book on Game Theory: http://www.amazon.com/gp/product/0071400206?sa-no-redirect=1&pldnSite=1 [http://www.amazon.com/gp/product/0071400206?sa-no-redirect=1&pldnSite=1]

I want to build a facebook app but I know very little about where to start (and the fb developer page FAQ is pitched to a way higher level than I am -- I need to know where I edit code, tutorials on using the API, etc). Any links to helpful resources/walkthroughs for a beginner?

What is your programming background?
I know some Python (took Udacity classes -- Building a Search Engine, Intro to AI, etc). Can solve some Project Euler problems.
If patient, start by doing something easier. Programs which don't talk to complex services are vastly simpler to write than programs which do. Typically, CS classes will build up with an escalating series of toy problems; there are good reasons for this. Additionally, Project Euler problems are more dependent on mathematical problem solving than high-level understanding of programming. If impatient... well, you might be able to find cheat sheets on Stack Overflow, but quite frankly I'd give you a <20% chance of success until you can understand FB's developer page.
I've done some toy problems (it's harder to be motivated when they're not leading up to something I want), but I don't know the sequence of toy problems that would help me get up to understanding FB's dev page. I know my current set (Project Euler) isn't moving me closer. Where would you recommend looking for better toy problems to build the relevant skills?
The thing is, it's not like you need to learn a programming language better, what you need to learn is Facebook's API and the libraries involved. That's a separate narrow domain of knowledge -- improving your Python skills will not advance you here. Maybe start with a "Hello, world!" program inside Facebook, then incrementally add things to it? I am sure there is example code available -- take the simplest/smallest program, understand how and why it works, try modifying it...
I just had a glance at their API docs. At a first glance, it seems reasonably well-factored, and I have little doubt that I could learn to use it in a matter of days, with "hello, world" in hours. There isn't anything terribly unusual there. That's the starting point from which I'd give good odds at writing a facebook app. Unfortunately, it isn't palladias' starting point; unless I'm completely mistaken, he's several levels of inference away from understanding their API docs. Attempting to spike, by learning just what is needed to understand Facebook's APIs, is likely to produce a fragile understanding that breaks the moment they change anything. Ideally, you'd want a broad enough base of understanding that you can predict where to look for bits of API because it's where you would put them yourself.
As it happens, palladias is female.
So noted, though I don't see the relevance.
Just that you said "he".
Hmm. Well, I don't have much of a gender identity, so I don't know how annoying being addressed like that would be. On the other hand, English doesn't have any gender-neutral pronouns that don't make me feel silly, and I refuse to make one up when I'm not writing fiction.
Well, obviously it's up to you. My own preference is to do one of:

* determine the gender of the person you're referring to
* use a gender-neutral pronoun
* restructure the sentence so as not to need a gendered pronoun
* use a construction like "he or she"

in preference to possibly misgendering someone, since I know some people find that very unpleasant. But if those are all too much trouble, fair enough.
Universities. There are a large variety of BSc/MSc-grade courses online, some of which are actually good, and of course 'offline' is also an option. Unfortunately I don't know what the best ones would be now, but MIT is probably a good starting point. It won't get you everything you need, though. See my response to Lumifer.
Unless you have a specific reason that you want to use Facebook, I would recommend not using Facebook to build your app. If you use Facebook, you will be locked in to their platform, subject to their whims, and part of a system that does not respect the privacy of the users, or respect users in general. Contributing to an open source project, or building your own open source application, would be a better way to improve and showcase your skills, in my opinion. It would also have a more positive impact on the world. That said, Facebook's Developer page's Quickstart Guide [https://developers.facebook.com/docs/graph-api/quickstart/v2.1] says that Facebook's Graph API is based on HTTP. So, I would recommend learning about how HTTP works, getting a better understanding of it, then downloading Facebook's Graph API Explorer tool, and fooling around with that, as a place to begin. There are probably many ways to learn about HTTP, but one way you might consider is the O'Reilly book on the subject: HTTP: The Definitive Guide by David Gourley and Brian Totty. I haven't read it, but I have found other O'Reilly books to be helpful with more clarity in the writing than in many other technical books.
My reasoning for using fb is that my plan involves fb (i.e. I'm picking a random friend of the person using the app and then [a slightly more involved version of gratitude journaling]). And that I figured that I wanted the app to occur within something the person was already using, to avoid the inconvenience of downloading a standalone app or visiting a website.
The facebook API is probably the friendliest and best-documented "web API" out there (which is not to say it's good, and they're terrible about changing it all the time - but compared to their competitors...). Their docs have examples, and also the interactive "graph explorer" is a very good place to play around to start with. You edit code the way you normally do, and your code makes web requests to their APIs. If you're not comfortable with that, maybe some tutorials on e.g. web scraping (that's a popular thing to do in Python) would help you get more comfortable with doing HTTP in Python? Or from the other edge, try making the necessary HTTP ("REST") requests to facebook "by hand" using something like Postman, and then writing code to do that. As others have said, start with a "hello world" and then add features to it.
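To make the "your code makes web requests to their APIs" part concrete, here is a minimal Python sketch of calling the Graph API. The token is a placeholder (you'd get a real one from the Graph API Explorer mentioned above), the `/me` path and `fields` parameter follow the Graph API docs of the time, and the version prefix changes frequently, so treat this as illustrative rather than current:

```python
import json
import urllib.request
from urllib.parse import urlencode

# Graph API base URL; v2.1 was current as of this thread and will change.
GRAPH_URL = "https://graph.facebook.com/v2.1"

def graph_request_url(path, access_token, **params):
    """Build the URL for a Graph API GET request (token goes in the query string)."""
    params["access_token"] = access_token
    query = urlencode(sorted(params.items()))  # sorted for a deterministic URL
    return "{}/{}?{}".format(GRAPH_URL, path.lstrip("/"), query)

def get_me(access_token):
    """Fetch the /me node and decode the JSON response.

    Requires a valid access token; this actually hits the network.
    """
    with urllib.request.urlopen(graph_request_url("me", access_token)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Once the "hello world" of fetching `/me` works, adding features is mostly a matter of changing the path and parameters (e.g. `me/friends`), which is why playing in the Graph API Explorer first pays off.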


Discussion of the Open Thread goes here.

I think the easiest way to implement this is to replace Open Thread with a third subreddit.

Is there any way to promote open thread comments and the comments in answer to them to Discussion? If there isn't any way, should there be?

No there isn't. Ideally there would be but it probably takes development resources that we don't have.
Short of copying and pasting it, AFAIK there isn't. Probably. I mean, I'd like it if it happened, but I'm not holding my breath.
I'm interested in trying to figure out how to do this, but am curious about who would have the authority to promote.
One approach would be that everyone (or perhaps everyone above a moderate karma level) has the authority to promote -- after all, anyone can post anything to Discussion. The advantage of changing things is that a promoted open thread comment would bring its comments with it. If you want a restriction, it might be that the original comment and/or the original comment + its comments have some specified karma total.

Question: Is there such a thing as mathematical ethics? I mean rigorous modelling of moral choices based on mathematical objects (let's call them virtue functions) and derivation of qualitative and/or quantitative properties of these objects based on standard math tools like derivatives, order theory, statistics or whatever.

I'm asking because yesterday I had an interesting discussion about ethics which involved modelling subjective value judgements as a function. I'd like to relate this to possible existing work.

I did find these links:

... (read more)
Like a utility function?
Wouldn't this be more or less the same thing as decision theory, just applied to one particular sort of set of preferences?
As always, mathematics has a more abstract version of the model you're looking for.
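The "subjective value judgements as a function" idea can be sketched directly. This is plain expected-utility decision theory (as the sibling comments suggest) rather than any specific ethical formalism, and every number below is a made-up illustrative weight, not a real moral theory:

```python
# A toy "virtue function" model: subjective value judgements as a real-valued
# function over outcomes, with choice by expected-utility maximization.

def expected_utility(action, utility, outcome_probs):
    """E[U | action] = sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * utility(outcome)
               for outcome, p in outcome_probs[action].items())

def best_action(actions, utility, outcome_probs):
    """The action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, utility, outcome_probs))

# Illustrative setup: two actions, each a lottery over three outcomes.
utility = {"everyone_helped": 10, "no_change": 0, "someone_harmed": -20}.get
outcome_probs = {
    "donate":  {"everyone_helped": 0.6, "no_change": 0.4},
    "abstain": {"no_change": 0.9, "someone_harmed": 0.1},
}
```

With these weights, `donate` has expected utility 6.0 and `abstain` -2.0, so the model picks `donate`. The "standard math tools" question then becomes a question about properties of the utility function (monotonicity, convexity, how it orders lotteries), which is exactly the territory of decision theory.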

A couple of my friends mentioned it, without being able to pinpoint what it was, so I wasn't asking about it too much. However, its existence cannot be denied if it's on the map of the rationalist community. What the heck is 'post-rationality'?

Darcey Riley (lucidian [http://lesswrong.com/user/lucidian/overview/]) has written the introductory post [http://yearlycider.wordpress.com/2014/09/19/postrationality-table-of-contents/] to a planned series of "what is postrationality?" posts. Edit (2017-01-22): Metarationality Repository [http://lesswrong.com/lw/oi6/metarationality_repository/]

Where to Emigrate?

I'm looking at places to relocate to from the United States south. My plan is to search for job opportunities in ideal locations until I find one that I can obtain, then obtain it. At the moment, I'm composing a list of candidate locations (both in and out of the U.S.) that meet my criteria, but I'd also like an outside opinion as well.

What are some ideal locations to move to for permanent residency?

Something like http://en.wikipedia.org/wiki/Mercer_Quality_of_Living_Survey [http://en.wikipedia.org/wiki/Mercer_Quality_of_Living_Survey] ? (though if you're like me then filtered by local language). Of course that assumes you want to live in a city.
That is indeed a handy resource. Thank you for the link.
What are your criteria? And what places meet them so far?
I avoided listing my criteria because I wanted an unfiltered idea of what people here consider a "good place to live." Since you asked, though, I'll tell you my areas so far: In the United States, I am looking for cities and areas that show some forward thinking and planning (generally determined by looking at their environmental measures and infrastructure as well as their economic history), a healthy population (generally aligned with a good healthcare system), and an active intellectual community (libraries are my field so, aside from personal enjoyment of intelligent groups to associate with, this helps in securing a job). Some cities on my list: Austin, Texas; Boulder, Colorado; the Bay Area of California; the Research Triangle of North Carolina; the northern area of Vermont and New Hampshire (more for environment and well-being); and the Seattle area of Washington. Outside of the U.S. I am looking at countries that display much the same qualities: a strong economy and infrastructure, an active intellectual scene, a reliable healthcare system. Countries of interest so far are: Luxembourg, Switzerland, France, and the United Kingdom. I'm also looking at Canada, especially for its proximity to the U.S. and the main providers of cryonics, should that become a factor in my decision making.
The San Francisco Bay Area is okay at environmental policy, but it's astonishingly bad at infrastructure. People move here for jobs and human capital, not because it's a well-managed urban area; it really, really isn't.
Hmm, that I was not aware of. I'll have to look closer at their infrastructure and see how bad they are and how that changes my opinion. Thanks for the info.
In Berlin you will get much farther via English than in France. Berlin has relatively cheap rent for being a major city.
Which languages do you speak? While it is possible to function comfortably speaking only American in the UK, Luxembourg and much of Switzerland, France is a no-go.
English. Part of my build up to a potential move (should I decide on another country) is to learn a second language. Probably French if I'm planning to move to the continent.
That doesn't sound like southward at all.

I was pondering the whole mass-downvote kerfuffle a while back, and even though I generally agree with the end result from gut instinct reasoning, I'm struck by the following:

The downvoter had an objective, and rationally used the tool of downvoting to achieve it rather than constraining himself arbitrarily. If HPJEV were a forum-dweller instead of a wizard, he would do the very same.

I also have an objective. My objective is this: At least somewhere on the internet, there should exist a community where people can have real discussion, ie, a dispassionate exchange of priors, likelihood ratios and arguments. It will not be possible for me to achieve my objective if participants turn discussions into wars. It will also not work if people with certain views feel unwelcome, or scared to vocalize their views.

Yes, he may have been acting rationally, in the same way that somebody who defects in Prisoner's Dilemma acts rationally. In fact, it would be rational for anyone to use unacceptable tactics in order for their side to "win" the discussion. However, the continued existence of Less Wrong as a rationalist community depends on people cooperating in this game. Moloch will certainly kill the rationalist spirit if we don't punish defectors.

Sometimes it is rational to punish defectors even if the defectors themselves are acting rationally. I do however understand that this is a difficult trade-off, as we have seen strong evidence that there are people who are willing to participate and have high-quality insights that are not easily obtained elsewhere, but who refuse to play by the rules.

It is not at all clear that Eugine achieved his objective. One thing he certainly achieved was to get kicked ignominiously out of the Less Wrong community, which I'm guessing wasn't an objective. (though I have seen speculation that he has returned under a different name).

Which doesn't mean that the new account won't also get deleted, and we will be more careful the next time around. It's just that processes on LW take time.
For the avoidance of doubt, I agree and wasn't at all intending to suggest that if Eugine is back then kicking him out didn't accomplish anything, or that if he's back and behaving in the same way that got him thrown out the first time it's in any way worrying that he hasn't been re-expelled yet. (I do think that if it turns out he's back and doing the same things that got him thrown out before, the moderators should dial up the disincentives this time around. Block LW access from his IP address, reverse every vote he ever cast, that sort of thing.)
Back in the days when I was moderating a forum, rebanning a reregistered person might have taken a week, not months. Given the amount of time I spend on LW I wouldn't have expected it to take that long, but then I'm no longer thinking with the moderator hat on. Blocking his IP address doesn't accomplish anything given the availability of proxies, except maybe being an insult to his perceived lack of IT knowledge. Reversing every vote needs someone to write the necessary code.
I'm all for insulting people who abuse the system :-). And for putting trivial inconveniences in their way, if nontrivial ones are too difficult. And presumably, indeed, reversing all someone's votes isn't currently supported by whatever LW admin tools there are, and implementing that feature would be a non-negligible cost to weigh against the ability to disincentivize abuse in that way. (For the avoidance of ambiguity: I meant "undo", not "reverse the sign of". It would be quite amusing to reverse the sign of all someone's votes as a punishment for abuse, but I can think of more than one reason why it probably wouldn't be a good idea.)
People switching IP addresses makes it harder to track that they are reregistering, so it's usually not a smart move to force this behavior.

Rationality is about going down the winning road and accurately predicting the consequences of your actions. Punishment creates deterrence for rational actors.

Eugine very likely made wrong predictions over the results of his actions.

http://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/ [http://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/]
Well no, because I doubt he'd share the downvoter's objective. (I assume. I wasn't following the kerfuffle.) To conclude that he would, you have to transplant his methods onto a forum setting but not his goals. Which is a weird level to model at.
Given the strong ethical view that HPJEV takes of lying, that would be grossly against his character. He might also say, as would I, that it's a short step from mass downvoting to what Yvain reports here [http://slatestarcodex.com/2014/09/26/i-am-being-framed/].
I see a qualitative difference between mass downvoting and malicious impersonation--- malicious impersonation is a much stronger effort to damage reputation. Mass downvoting is a way of saying "this person is disliked", while malicious impersonation is supplying false evidence that the person is detestable.
Well, HPJEV's ethics are wildly inconsistent moment-to-moment, so...
I think the clearest example is in his attitude towards death-eaters and bullies. He hears Draco talk about raping people, and decides to befriend him. He sees people being bullies at school, and is ready to seriously injure them. He hears about his own parents being bullies, and decides they must have been terrible. He says that death-eaters have made their lives forfeit (I think Amycus Carrow or somebody has threatened Hermione's life or something, and Harry is telling Dumbledore how he wants to go and murder him). He pals around with Quirrell even though he is OBVIOUSLY EVIL (okay, this really isn't about Harry's ethics, but it's clearly an issue of the story). He talks a big game about scouring evil people from the earth, but is revolted to his very core by Azkaban (and releases a murderer who had been found guilty; despite the tenuous reasoning he's given for why she wasn't truly guilty, from a man who is OBVIOUSLY EVIL, it still seems like a low-marginal-value thing to spend his time on, especially considering he simultaneously ignores the plight of every other prisoner). He threatens the Wizengamot over their verdict, even though the evidence is pretty damning against Hermione, honestly. Essentially, what I'm trying to say is that Harry is in one second ready to kill anyone who is moderately indecent to the point of bullying another, or not immediately respecting that he is the most ingenious-and-capable 11-yo in the world, and in the next second is the most generous, rehabilitation-focused humanist possible, and in the next will rationalize pretty sketchy bargains with Quirrell or Lucius Malfoy.
Like an amoral capitalist, he revealed flaws in our moderation system that really ought to be addressed in a less ad-hoc way.
Exactly what I'm saying. Don't know who would downvote you for that!
I didn't downvote, but I can see downvoting for bringing in a larger political issue. See also: Like an amoral politician, he revealed flaws in our moderation system that really ought to be addressed in a less ad-hoc way.