Open Thread, June 1-15, 2012

1 min read · 1st Jun 2012 · 260 comments


Open Threads
Personal Blog

If it's worth saying, but not worth its own post, even in Discussion, it goes here.


I've been getting the feeling lately that LW-main is an academic ghetto where you have to be sophisticated and use citations and stuff. My model of what it should be is more like a blog with content that is interesting and educational, but shorter and easier to understand. These big 50-page "ebooks" that seem to get a lot of upvotes aren't obviously better to me than shorter and more to-the-point posts with the same titles would be.

Are we suffering sophistication inflation?

I feel like the mechanism probably goes something like:

1. People are generally pretty risk-averse when it comes to putting themselves out in public in that way, even when the only risk is "Oh no my post got downvoted"
2. Consequently, I'm only likely to post something to Main if I personally believe that it exceeds the average quality threshold.

An appropriate umeshism might be "If you've never gotten a post moved to Discussion, you're being too much of a perfectionist."

The problem, of course, is that there are very few things we can do to reverse the trend towards higher and higher post sophistication, since it's not an explicit threshold set by anyone but simply a runaway escalation.

One possible "patch" which comes to mind would be to set it up so that sufficiently high-scoring Discussion posts automatically get moved to Main, although I have no idea how technically complicated that is. I don't even think the bar would have to be that high. Picking an arbitrary "nothing up my sleeve" number of 10, at the moment the posts above 10 points on the first page of Discussion are:

• Low Hanging Fruit -- Basic Bedroom Decorating
• Only say 'rational' when
... (read more)
maia (+4, 9y): I'd be concerned about posts like "the rational rationalist's guide" being moved to Main. It's an amusing post, but I really don't think it meets the standards I would want for the main blog. And it is quite highly upvoted. I think this shows that just going by upvotes may be insufficient.

wgd (0, 9y): I'm not particularly attached to that metric; it was mostly just an example of "here's a probably-cheap hack which could help remedy the problem". On the other hand, I'm not convinced that one post means that an "automatically promote after a score of 10" policy wouldn't improve the overall state of affairs, even if that particular post is a net negative.

vi21maobk9vp (+2, 9y): Well, if the general idea is that Main-blog posts are a good read per se, even without reading comments or any Discussion threads, I'd say that of the 13 posts in the list, there are:

• Information-conveying medium-length posts liked by the community
• Posts very relevant in Discussion but out of context in Main
• Discussion of low-level strategy (will be useful for a general audience after we know how it turned out, maybe; currently it is a status update shown to those interested in the inner workings of the community)
• Questions, not blog posts
• Something between a question and strategy discussion
• An interesting external link
• A set of external links

All in all, I would say that 3 of the 13 clearly match my perception of the idea of "Main" and 2 more match my perception of the supposed reading pattern of "Main". For the majority of posts, moving them to Main means somewhat redefining Main. I don't have an opinion on whether that is a good or a bad idea (I read both for fun and don't believe in LW core values), but I do think that the majority of the high-voted Discussion posts cited had a respectable reason to be in Discussion.

beoShaffer (+2, 9y): Can you give a specific example of recent posts that you object to? Because that is not the impression I've been getting.

lsparrish (+4, 9y): I don't mean that I object to it completely, but the type that seem a bit overrepresented are like these:

• http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/
• http://lesswrong.com/lw/b7w/decision_theories_a_semiformal_analysis_part_iii/
• http://lesswrong.com/lw/bzy/crowdsourcing_the_availability_heuristic/

I did recently have an article of my own promoted which is less sophisticated, so I'm not complaining. I'm just wondering if people might be choosing to hold back out of fear that their less academic writing style is not good enough for Main, and/or dressing it up more than need be.

beoShaffer (0, 9y): I see what you're saying with the last one, but I don't think the middle one was overrepresented. The first one is a tricky case, as the author was explicitly workshopping an academic piece in the making, and had good reason to use LW for that purpose.
[anonymous] (+15, 8y):

Moldbug on Cancer (and medicine in general)

I'm going to be a heretic and argue that the problem with cancer research is institutional, not biological. The biological problem is clearly very hard, but the institutional problem is impossible.

You might or might not be familiar with the term "OODA loop," originally developed by fighter pilots:

http://en.wikipedia.org/wiki/OODA_loop

If the war on cancer was a dogfight, you'd need an order from the President every time you wanted to adjust your ailerons. Your OODA loop is 10-20 years long. If you're in an F-16 with Sidewinder missiles, and I'm in a Wright Flyer with a Colt .45, I'm still going to kill you under these conditions. Cancer is not (usually) a Wright Flyer with a Colt .45.

Lots of programmers are reading this. Here's an example of what life as a programmer would be like if you had to work with a 10-year OODA loop. You write an OS, complete with documentation and test suites, on paper. 10 years later, the code is finally typed in and you see if the test suites run. If bug - your OS failed! Restart the loop. I think it's pretty obvious that given these institutional constraints, we'd still be running CP/M. Oncology is s

... (read more)
othercriteria (+4, 8y): Following JoshuaZ, I also don't think this remark should stand unchallenged. Off-target effects are difficult to predict [http://en.wikipedia.org/wiki/COX-2_inhibitor] even for a single, well-understood drug. Also, the CYPs [http://en.wikipedia.org/wiki/Cytochrome_P450] in your liver can turn a safe drug into a much scarier metabolite [http://en.wikipedia.org/wiki/Paracetamol_toxicity], and the drugs themselves can modify the activity of the CYPs. Combined with dynamic dosing ("keep doubling the dose until the patient feels side effects"), the blood levels of the 30 drugs will be all over the place. What are the common elements present when the patient has been dosed with varying amounts of 30 different drugs? If the cancer is cured, how should the credit be split among the teams? If the patient dies, who gets the blame? The anti-quackery property of the current research regime is not just to prevent patients from being hoodwinked. It's epistemic hygiene to keep the researchers from fooling themselves into thinking they're helping when they're really doing nothing or causing active harm.

[anonymous] (0, 8y): I was talking about the bolded part (though I happen to approve of the text that follows it too) when I said that he is of course right. Our dealings with medicine seem tainted by an irrational risk aversion.

othercriteria (+1, 8y): Fair enough. Only my last point sort of engages with the bolded text. I think there are much sounder ways to buy fewer undertreatment deaths from those extra sigmas of confidence than the plan that Moldbug proposes.

witzvo (+3, 8y): The ethical principles of experimenting on human beings are pretty subtle. It's not just about protecting people from quackery, though he is right that there is a legacy of Nuremberg [http://en.wikipedia.org/wiki/Nuremberg_Code] involved. Read, for example, the guides [http://www.hhs.gov/ohrp/archive/irb/irb_introduction.htm] that the Institutional Review Boards that approve scientific research must follow:

• Respect for persons involves a recognition of the personal dignity and autonomy of individuals, and special protection of those persons with diminished autonomy.
• Beneficence entails an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks of harm.
• Justice requires that the benefits and burdens of research be distributed fairly.

The most relevant principle here is "beneficence". Unless the experimenters can claim to be in equipoise [http://en.wikipedia.org/wiki/Clinical_equipoise] about which of two procedures will be more beneficial, they're obligated to use the presumed better option (which means no randomization). You can get away with more in pursuit of practice than you can in pursuit of research, but practice is deliberately restricted to prevent obtaining generalizable knowledge. Roughly put, society has decided that it would rather have the only experiments we perform be ones where there's no appreciable possibility of harm to the participants, than allow that it is sometimes necessary for the progress of science that noble volunteers try things which we can't be sure are good, and might be expected to be a bit worse, so that society can learn when they turn out to be better, or when they teach us things that suggest the better option.

In a more rational society, everyone would have to accept that their treatment might not be the best possible for them (according to our current state of ignorance), but would require that the treatment be designed in order to lead to generalizable knowledge for the
shminux (+3, 8y): I'm wondering if the comparison with a dogfight is fair, though. With only the conservative treatment, Steve has months or years to live, while a single wrong move kills him quickly. Dogfights are the opposite: the conservative approach (the absence of a single right move) has a significant chance of doing you in. In other words, the expected lifetime between doing something and doing nothing is reversed in a fight vs. a treatment.

drethelin (+3, 8y): A doctor in Australia has wanted to use the product our company makes to try in vivo treatment of cancer, and we are unable to let him because of how insanely liable we would be; the high cost of having a GMP facility (which would in no actual way improve the product) means it's unlikely to ever be a thing.

Athrelon (+2, 8y): As a medical student, this quote has significantly perturbed how I think about the epistemology of the field, though I'm still processing it. Well done!

Multiheaded (+2, 8y): Shocking! Why, who'd expect it from such a pillar of society! (Sure, he's 110% right in this isolated argument, and the medical industry is indeed a blatant, painfully obvious mafia. But one could make a bit of a case against this by arguing from disproportionately risky outliers: e.g. what if we try to make AIDS attack itself but instead make it air- and waterborne, and then it slips away from the lab? What if we protect the AI industry from intrusive regulation early on, when it's still safe, and then suddenly it's an arms race of several UFAI projects, each hoping to be a little less bad than the others?)

[anonymous] (+7, 8y): imagines the US Congress trying to legislate friendliness or regulate AI safety ಠ_ಠ

Multiheaded (+4, 8y): Weirder, absurder stuff has happened - and certainly has been speculated about. In fact, Stanislaw Lem wrote several novels and stories that largely depict Western bureaucracy trying to cope with AIs and other emerging technologies, and the insane disreality produced by that (His Master's Voice, Peace on Earth, Golem XIV, and the unashamed Dick ripoff... er, homage, Memoirs Found in a Bathtub). I've read the first three; they're great. For that matter, check out Dick's short stories and essays too.

[anonymous] (+2, 8y): I know! I mean, if something like this happens, who knows what might be next. People might start finding reasons why death is good, or Robin Hanson might turn into a cynic.
steven0461 (+2, 8y): I see he's commented about LessWrong [http://news.ycombinator.com/item?id=4024191], also.

JoshuaZ (+5, 8y): Frankly, that bit came across as more or less projection. Although he is marginally correct that there does seem, on occasion, to be an unhealthy attitude here that we're the only smart people.

faul_sname (+2, 8y): On occasion? (Note that the "people in my group are smarter/better" idea is not at all unique to LW.)

gwern (+5, 8y): For Moldbug to say that we're merely colossally ignorant seems like a serious compliment to us.

TimS (+1, 8y): Have you read Eugene Volokh's writings on the right to medical self-defense?

JoshuaZ (0, 8y): He's not right; he's marginally correct. First, he ignores that even under current circumstances a lot of people die from quackery (and in fact the example he uses, Steve Jobs, is arguably such a case, since Jobs used various ineffective alternative medicines until it was too late). Moreover, cancer mortality rates are declining, so the system isn't as ineffective as he makes it out to be. His basic thrust may be valid - there's no question that the FDA has become more bureaucratic, and that some laws and regulations are preventing research that might otherwise go ahead - but he is massively overstating the strength of his case.

[anonymous] (+9, 8y): Steve Jobs sought out quackery. You seem to be confused about what is meant by quackery here: people who die because they rely on alternative medicine aren't going to be helped in the slightest by an additional six or five or four sigmas of certainty within the walled garden of our medical regulatory system. Medical malpractice and incompetence is also not the correct meaning of "death by quackery" in the above text. "Death by quackery" quite clearly refers to deaths caused by experimental treatments while figuring out what the hell is happening. You miss a far better reason to criticize Moldbug here: even with those expensive six sigmas of certainty, many people end up distrusting established medicine enough to seek alternatives. If you reduce the sigmas of certainty, more people will wander into the wild weeds outside the garden, and these people seem more likely to be hurt than not. Not only that: even controlling for these people, the six sigmas of certainty might also be buying us placebo for the masses. But this is easy to overestimate, since it is easy to forget how very ignorant people really are. They accept the "doctor's orders" and trust doctors with their lives not because the doctor is right or extremely likely to be right, but because he is high status and it is expected of people to follow "doctor's orders". The reason doctors are high status in our society has little to do with them being good at what they do. Doctors have been respected in the West for a long time, and not so ancient is a time when it is plausible to argue that they killed more people than they saved. The truth of that last question matters far less than the fact that it can be raised at all! Leaving aside doctors in particular, it seems a near human universal that healers, or at least one class of healers, are high status regardless of their efficacy.

DanArmak (+2, 8y): Nevertheless, today I believe doctors save many more than they kill. I want doctors to treat me, and I want them to become much better at treating me. And if there's no better choice, I will cheerfully pay the price of more people turning to quackery, because I won't do it myself.

My brain came up with this thought:

All else being equal, a murder is better than an accidental death, because a murder at least satisfies someone's preferences.

I was very tempted to take this as a reductio ad absurdum of consequentialism, to find all the posts where I advocated consequentialism and edit them, saying I'm not a consequentialist anymore, and to rethink my entire object-level ethics from the ground up.

And then my brain came up with other thoughts that defeated the reductio and I'm just as consequentialist as before.

For some reason, this was all very scary to me. This is now the third data point in the category of "Grognor's opinion being changed by arguments way too easily". I think I'm gullible.

Three things:

1. I'm curious if other consequentialists will find the same knockdown for the reductio that I did.
2. Should I increase my belief in consequentialism since it just passed a strenuous test, decrease it because it just barely survived a bout with a crippling illness, or leave it the same because they cancel out or some other reason?
3. I can't seem to figure out when not to change my mind in response to reasonable-looking arguments. Help

Maybe you need to pay more attention to the ceteris paribus. When you include that, it seems perfectly sensible to me.

Consider a world in which in 1945 Adolf Hitler will either choke to death on a piece of spaghetti or will be poisoned by a survivor of the death camps that bribed his way into Hitler's bunker...

Multiheaded (+2, 8y): Pop psych states that murder, especially first-time murder, induces lifelong psychological trauma in neurotypical adult people - and that, therefore, most of them lose more (I'm not saying "more utility") than they gain. Clearly, that wouldn't be the case with the death camp survivor [1], but I can see a sane, relatively untraumatized civilian who'd volunteer for Hitler's post-war execution regretting their loss of innocence afterwards.

[1] I've heard that this was what happened with the commandant of Dachau and some of the SS guards there, who were turned over to the liberated prisoners by American soldiers, and presumably torn apart by them. http://en.wikipedia.org/wiki/Dachau_massacre

[anonymous] (+1, 8y): Not being a murderer, regardless of the utilitarian pay-offs, is an important part of people's identity and reasoning about morality. It is fascinating just how damn crushing such tags can get once you start multiplying them, even in the eyes of other people. Consider someone who has killed for the greater good. Now consider someone who has killed, raped, and pillaged for the greater good (by historical standards this is the regular war-hero pack). Now consider someone who has killed, raped, blackmailed, and tortured for the greater good. One may be glad that such people do exist and that they are on one's "side". But wouldn't you feel uneasy around such a person? Especially if you couldn't abstract away their acts but had to watch, say, videos of them being performed. Imagine carrying those memories; what is your self-conception? The only tale of virtue you have left when you are alone at night is that you possess the virtue of being the kind of person who is capable of suspending moral inhibitions based on long chains of reasoning. Maybe you are just that good at reasoning. Maybe.

wedrifid (+1, 8y): The parenthetical is true, but the raping and (for the most part) the pillaging was for personal gain, not the public good. It takes much more effort to contrive scenarios with folks who "rape for the public good".

[anonymous] (+1, 8y): No more than torture for the public good, since rape can be used as a form of torture. It has also been used as a form of psychological warfare. Also, pillaging can be vital to easing the logistic difficulties of your side.

Multiheaded (0, 8y): Indeed; if the good guys are murdering whom they want and extorting stuff from the populace, it's called a resistance movement, and in a generation there's hardly anyone who thinks ill of them. See Russia, Spain, China, etc.

TheOtherDave (0, 8y): By implying by omission that the killing was not mostly for personal gain, do you mean to suggest that it was for the public good, or to invoke a non-excluded middle?

wedrifid (+1, 8y): I make no claim about the killing - that is at least arguable, and its inclusion would distract from the main point: that the raping in the example given (historic war bands) was not.

Multiheaded (+1, 8y): Also - and primarily - this. Damn, of course I considered those aspects too; I'm not so psychologically blind as to not understand them. I just was too lazy to hammer that idea into shape for my comment. So you deserve all the karma for it. Um, sorry, I'm just frustrated by how often I neglect to mention some facet of an argument due to it being difficult to communicate through the written word [http://orwell.ru/library/articles/words/english/e_words].

[anonymous] (0, 8y): I was merely elaborating on an argument that I thought was already there but deserved some more attention, particularly in this line:

Multiheaded (+2, 8y): Yep, but you see, there's a difference between "mere" emotional anguish, which is, after all, biologically constrained in a way, and identity-related problems (which, as I understand it, can ruin a person precisely by using their "normal" state, with conscious voiced thoughts, as a carrier). It's mostly bad to feel bad about yourself, but to know bad things about yourself - seemingly in the empirical sense, just like you know there's a monitor in front of you - is even worse. Not everyone would see this in my phrase; I should've elaborated.

Multiheaded (+1, 8y): BTW, I've sent you a few PMs with interesting (IMO) questions over the last few weeks/months, and none have been answered! I don't wish to embarrass you; I'm just curious if they might've been simply eaten by the mail gremlins. :) Might I just copy them and re-send them, so you could share an opinion or two at your leisure?

[anonymous] (0, 8y): Oh, don't worry, I'm going to respond to all of those in order; if you remember, I did send you a PM explaining that I was going to respond to them eventually (that dreadful word). Quite honestly, though, I really dislike LW's PM system: for starters, my inbox contains both PMs and regular public responses, and they get kind of lost in the mail, so there is that trivial barrier to responding. I think I've already mentioned that I'd like to move our correspondence to email, so if you wouldn't mind sending the text of your previous unanswered PMs in that format, or me sending you an email quoting them, I would much prefer that mode of communication. I'm also very much open to communicating live via Skype or other IM programs. Though obviously we'll probably PM such contact data instead of disclosing it publicly.

Multiheaded (0, 8y): Kthx.
wedrifid (+4, 8y): It can't be a reductio ad absurdum of consequentialism, because the quoted claim isn't even implied by consequentialism. It is implied by some forms of utilitarianism. Consequentialism cares (directly) only about one set of preferences, and the fact that the murderer has a preference for successfully murdering doesn't get a positive weighting unless the specific utility function arbitrarily happens to give it one. It is just as easy to have a consequentialist utility function that prefers the accident to the murder as the reverse.

Oscar_Cunningham (+3, 9y): See this quote [http://lesswrong.com/lw/9pk/rationality_quotes_february_2012/5t7n]. Presumably you already had strong arguments in favour of consequentialism. So when you came across a knock-down counterexample, your first reaction should have been confusion. When you encounter a convincing argument against a position you hold strongly, bring to mind the arguments that first convinced you of that position and try to bring the opposing arguments into direct conflict with them. It should then be clear that one of the arguments has a logical flaw in it. Find out which one.

Ezekiel (+2, 9y): That seems like it could easily slip into rehearsing the evidence, which can be disastrous. Watch out for that.

Oscar_Cunningham (0, 9y): Yes; I only felt okay about recommending it because Grognor was complaining of exactly the opposite problem.

Ezekiel (+2, 9y): 3) I allot a reasonable-seeming amount of time to think before deciding to drastically change something important. The logic is that the argument isn't evidence in itself - the evidence is the fact that the argument exists, and that you're not aware of any flaws in it. If you haven't thought about it for a while, the probability of having found flaws is low whether or not those flaws exist - so not having found them yet is only weak evidence against your current position. So basically, "before you've had time to consider them".

djcb (0, 8y): Hmmm. Murder decreases the 'expected utility' (cf. life expectancy), so I think it would still be considered bad in some forms of consequentialism. The corner case, where expected utility would not change (much), would be e.g. shooting somebody who is falling off a cliff and will certainly not survive. More generally, it seems ethical systems are usually post-hoc organizing principles for our messy ethical intuitions. However, those intuitions are so messy that for every simple set of rules we can find some exception. Hence we get things like the trolley problem...

Alicorn (0, 9y): Since I'm not a consequentialist, will you just tell me?

[anonymous] (0, 9y): No. That would defeat one of his stated purposes for posting.

Alicorn (0, 9y): He could PM me, I mean.

I've recently started Redditing, and would like to take the opportunity to thank the LW readership for generally being such a high calibre of correspondent.

Thank you.

faul_sname (+2, 9y): Subscribe to the smaller subreddits, and also Depthhub. This will drastically improve your procrastination experience.

Let me guess it was one of the top posters

Yes.

who thought your recent criticism of the direction of the community got too much karma.

Yes; his criticism was trivially wrong, as could be seen just by looking at posts systematically.

Or maybe someone who didn't like your responses here.

Actually, I laid out exactly what was wrong with the post: it was a good idea which hadn't been developed anywhere to the extent that it would be worth reading or referring back to, and I gave pointers to the literature he could use to develop it.

The reason I told K... (read more)

TimS (+8, 8y): I'm not convinced his criticism is wrong. Lukeprog listed lots of substantive recent articles, but I question whether they were progress, given the current state of the community (for example, I'd like more historical analysis a la James Q. Wilson). Given the karma, it appears that the community is not convinced the criticism is wrong. Even if Konkvistador is wrong, he isn't trivially wrong.

gwern (+6, 8y): I think you're shifting goalposts. 'Progress', whatever that is, is different from being insular, and ironically enough, genuine progress can be taken as insularity. (For example, Rational Wiki mocks LW for being so into TDT/UDT/*DT, which don't yet have proper academic credentials, and insinuates they represent irrational cult-like markers, even though those are some of the few topics I think LW has made clear-cut progress on!) I don't like to appeal to karma. Karma is changeable, does change, and should change as time passes, the karma at any point being only a provisional estimate: I have, here and on Reddit, on occasion flipped a well-upvoted (or downvoted) comment to the other sign with a well-reasoned or researched rebuttal to some comment that is flat-out wrong. Perhaps people simply hadn't looked at the list of recent posts to notice that the basic claim of insularity was obviously wrong, or perhaps they were being generous and, like you, read him as claiming something more interesting or subtle or not so obviously wrong, like 'LW is not working on non-LW material enough'.

TimS (+3, 8y): Fair enough about karma. But the first sentence of Konkvistador's post (after the rhetorical question) says: And the second paragraph of the post begins: That looks a lot like saying, "LW is not working on non-LW material enough."

gwern (+2, 8y): Well, look through the examples, or heck, posts since then. Do you see people refusing to update? 'No, I refuse to believe the Greeks could have good empirical grounds for rejecting heliocentrism! I defy your data! And ditto for the possibility that Glenn Beck wrote anything flattering to our beliefs!'

TimS (+2, 8y): What I mean is that certain methodological approaches are heavily disfavored. Slightly longer version of my point here [http://lesswrong.com/r/discussion/lw/d06/intellectual_insularity_and_productivity/6tan]. Edit: And who is moving the goalposts now? You said "position X" is not trivially wrong. I said, "Here's an example of Konkvistador articulating position X."

gwern (+2, 8y): Since history is so often employed for political purposes ("It is a principle that shines impartially on the just and unjust that once you have a point of view, all history will back you up"), it's not surprising we don't discuss it much. If, even with this disfavoring, people still think posts like http://lesswrong.com/lw/cuk/progress/ are worth posting, and inspiring pseudohistory like this [http://lesswrong.com/lw/cuk/progress/6qyk] - then this is not a disfavoring I can disfavor. Not that excluding one area is much evidence of insularity. If one declares one will eat only non-apples, is one an insular and picky eater?

TimS (+3, 8y): I absolutely agree that history is filled with politically motivated bias. But there are actual historical facts (someone won the Siege of Vienna of 1529, and it wasn't the Ottoman Empire). There are historical theories that actually fit most of the facts, and pseudo-historical theories that fit carefully selected sets of facts. Being able to tell the difference is a valuable skill that members of this community should try to develop. To put it differently, the falsity of the theory of moral progress has implications for assessing the difficulty of building a Friendly AI, doesn't it?

gwern (+1, 8y): And how does one do that? The problem is that most historical facts are publicly available, so how does one distinguish a theory produced by data mining and overfitting from one that wasn't? The only historian I can think of who has anything close to an answer to that is Turchin [http://lesswrong.com/lw/9pm/minireview_proving_history_bayes_theorem_and_the/6qug], via the usual statistics method of holding back data to test the extrapolations. Turchin and Carrier are discussed occasionally, but not that much; why should I think this is not the right amount of discussion?

TimS (+2, 8y): The bigger problem is that most historical analysis takes the following form: If you have successfully avoided that trap, congratulations. Society as a whole has not, and this community is not noticeably better than the greater societies we are drawn from.

Eugine_Nier (-1, 8y): This is a thick [http://lesswrong.com/lw/cwk/link_thick_and_thin/] problem.
6CharlieSheen8yApologies for the harsh language gwern. I shouldn't have used it. I will edit and retract to correct that.
3CharlieSheen8yI didn't think so. Neither did the many posters who publicly endorsed the post. Also Lukeprog thought the article you found so clearly deficient worthy of inclusion on his productivity list. Either you are wrong and his article isn't crap. Or Luke's standards on what counts as productivity are too low in which case your argument on this criticism of his notion that we aren't making proper progress is that much weaker. Also we have different styles of writing. Have you noticed how people are getting bored of Main? Guess what maybe that's because its becoming a wannabe Academic ghetto dominate with only your style where new posters don't dare contribute. it may seem natural to a natural systematizing archiving [http://www.gwern.net/Links] outlier like you to spend a whole lot of time on your stuff polishing it to perfection, but all this will result in is a whole bunch of a small bag of boring posts of uniformly decent but not extraordinary quality. Isn't it funny that nearly any old Eliezer sequence post dosen't live up to such citation heavy, research made explicit standards you set? Such an article would be upvoted by the common poster make no mistake, but l33t busybodies like you would home in on the technicalities. First of the community obviously disagrees aside from positive comments on his contributions that I could dig up, he has received more karma in the past 30 days than any other single poster and ~7k overall isn't bad at all. And no this wasn't due to mass spamming. His average post has like 5 karma or something. Fracking Nerdling on a stick he's even currently like 50 points ahead of Eliezer HPMOR Yudkowsky who descended down from his throne to write an article answering criticism threatening his funding. 
If Konkvistador flat out asked you if it would be overall better than the current situation to stop posting at all, and you responded with a yes, then you either lack a social brain, because the right answer is not "yes" but "no, but you should
9ArisKatsaris8yI read your comment, and I downvoted you because it was rude towards gwern, calling him a "damn robot". And I'm one of the guys that urged Konkvistador to stay, in a comment above. That doesn't excuse your rudeness. So you get properly downvoted by me (and gwern got upvoted because I like that he spoke up and declared he was the "top poster" in question and also gave a clear explanation of his reasons). That konkvistador gave gwern's criticism more weight than he should isn't gwern's fault, it's konkvistador's.
0CharlieSheen8yI guess you are right ok I'll edit away the "damn robot" part. My points however haven't been addressed.
2gwern8yYeah, maybe. Other possibilities include being ironic: if he objects to his inclusion on the list... People are getting bored of Main because the best contributors like Yvain or Eliezer have other things to do, and the standard topics are hard to go over again without either repetition or going into depth beyond most readers. It happens: wells run dry or the material becomes too advanced. And everyone else isn't stepping up to the plate. So, things become less interesting. I don't criticize the posts, because Eliezer uses cites all the time in the sequences, and where he isn't, I often know the citations anyway from past discussions on SL4, standard transhumanist reading materials, the old SIAI Bookshelf, book & paper recommendations, etc. I'm glad that you were able to explain why I and other chatters in #lesswrong sometimes called him by that shortcut: we were just manipulating the IRC crowd. Good grief. Maybe I should just put up IRC logs for the past few days so people can see for themselves what was said...
1CharlieSheen8yThat's not very nice. Apparently LW is big on being nice. See, I'm learning. This is the first time I heard about this conversation occurring on IRC. Ok so I'm assuming Konk is a nick people use for him over there. But why use it on LW in this context? Come now, you were trying to communicate "oh look I'm socially near to him". You aren't always the intended audience. Criticism from the perspective of those unfamiliar with Yudkowsky's arguments is more valuable, don't you agree? The point of the sequences is to bring people up to speed.
1gwern8yIt's both clever and a dilemma which teaches a relevant point; it may not be nice, but that doesn't matter. Does it matter that it was IRC as opposed to a separate forum website? If it does matter, then perhaps you were jumping to conclusions in interpreting 'off-site'... Sure. But that's by definition criticism I am unable to give and an audience I am not in. Am I to be blamed for preferring the material I learn more from?
2TimS8yI have no expectations at all. But I believe that your stated goals closely match your actual goals - and swearing doesn't advance your stated goals.

Taken straight from the top of Hacker News: Eulerian Video Magnification for Revealing Subtle Changes in the World.

In short, some people have found an algorithm for amplifying periodic changes in a video. I suggest watching the video, the examples are striking.

The primary example they use is that of being able to measure someone's pulse by detecting the subtle variations of color in their face.

The relevance here, of course, is that it's a very concrete illustration of the fact that there's a hell of lot of information out there to be extracted (That Alien... (read more)
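For the curious, the core temporal-filtering idea behind the paper is easy to demonstrate in one dimension. This is a toy sketch of my own (not the authors' actual spatial-pyramid pipeline), and the pixel intensity, frequency band, and amplification factor below are made-up illustration values:

```python
import numpy as np

# One pixel's intensity over 10 seconds of 30 fps "video": a bright patch
# of skin with an invisibly small periodic flush from the pulse.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
baseline = 100.0
pulse = 0.2 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm, far too small to see
signal = baseline + pulse

# Temporal bandpass (0.8-2.0 Hz) via FFT masking: keep only frequencies
# in the band where a resting heart rate would live.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
band = (freqs >= 0.8) & (freqs <= 2.0)
filtered = np.fft.irfft(spectrum * band, n=len(signal))

# Amplify the band-limited component and add it back: the previously
# invisible variation is now roughly 50x larger.
amplified = signal + 50.0 * filtered
```

The real method does this per pixel over a spatial pyramid of the video, which is why the color change becomes visible in the reconstructed frames.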

Do you consider Stupid Questions Open Thread a useful thing? Do you want new iterations to appear more regularly? How often?

Even though I didn't ask anything in it, I enjoyed reading it and participating in discussions, and I think that it could reduce the "go to Sequences as in go to hell" problem and sophistication inflation.

I would like it to reoccur with approximately the regularity of usual Open Threads; maybe not on calendar basis, but after a week of silence in the old one or something like that.

2beoShaffer9yI consider them useful and roughly agree with the time interval you suggest.
0maia8ySeconded.
1Tuxedage9yI too, enjoy Open Threads, and I feel that they should occur with higher frequency, around every week or so.
0vi21maobk9vp9yStupid Questions Open Threads are not simply Open Threads like this one. They are special threads where people come to ask questions that are probably answered in multiple discussions on LW. Latest seen such thread is: http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/ [http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/]

Convergent instrumental goal: Kill All Humans

Katja Grace lists a few more convergent instrumental goals that an AGI would have absent some special measure to moderate that goal. It seems to me that the usual risk from AI can be phrased as a CIV of "kill all humans". Not just because you are made of atoms that can be used for something else, but because if our goals differ, we humans are likely to act to frustrate the goals of the AGI and even to destroy it, in order to maximize our own values; killing us all mitigates that risk.

If there's still somebody who thinks that the word "Singularity" hasn't lost all meaning, look no further than this paper:

We agree with Vinge's suggestion for naming events that are “capable of rupturing the fabric of human history” (or leading to profound societal changes) as a “singularity” [...] In this paper, we consider two past singularities (arguably with important enough social change to qualify) [...]. The globalization occurring under Portuguese leadership of maritime empire building and naval technological progress is characterized b

... (read more)
[-][anonymous]8y 7

Relevant to thinking about Moldbug's argument that decline in the quality of governance is masked by advances in technology and Pinker's argument on violence fading.

Murder and Medicine: The Lethality of Criminal Assault 1960-1999

Despite the proliferation of increasingly dangerous weapons and the very large increase in rates of serious criminal assault, since 1960, the lethality of such assault in the United States has dropped dramatically. This paradox has barely been studied and needs to be examined using national time-series data. Starting from the basi

... (read more)

Or maybe someone who didn't like your responses here.

If that's the reason someone asked Konkvistador to leave, then someone deserves less respect than given by Konkvistador. Much less respect.

Here's a math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)

Prove or disprove that for any real number $p$ between 0 and 1, there exist finite or infinite sequences $x_m$ and $y_n$ of positive real numbers, and a finite or infinite matrix of numbers $\varphi_{mn}$ each of which is either 0 or 1, such that:

1) $\sum_m x_m = 1$
2) $\sum_n y_n = 1$
3) $\forall n: \sum_m x_m \varphi_{mn} = p$
4) $\forall m: \sum_n y_n \varphi_{mn} = p$

Right now I only know it's true for rational $p$. ETA Now I al... (read more)
[-][anonymous]8y 5
Many articles don't use tags at all; others often misuse or underuse them. Too bad only article authors and editors can edit tags. I can't count the times I was researching a certain topic on LW and felt a micro annoyance when I found an article that clearly should be tagged but isn't. Could we perhaps make a public list of possible missing or poor tags by author, and then ask the author or an editor to fix it?
4Alicorn8yFeel free to PM me about this sort of thing.
4[anonymous]8yWow, thanks! I will start making a public list somewhere here, so people can comment or add their own and then PM you when say 10 articles in need of tagging are found. :)
The risk of supervolcanos looks higher than previously thought, though none are imminent. Is there anything conceivable which can be done to ameliorate the risk?
2gwern9yThe only suggestion I've heard is self-sustaining space colonies, which obviously is not doable anytime soon. Depending on the specifics, buried bunkers might work, as long as they're not on the same continent to be buried in the ash or lava.
0faul_sname9yHow long will it actually take for a self-sustaining colony in LEO to be plausible? We have the ISS and Biosphere 2, and have for quite some time. Zero G poses some problems, but certainly not insurmountable ones. It looks like we have at least a few hundred years of advance notice, which would likely be enough time to set up an orbital colony even with only current technology. Besides, it looks like past a couple hundred miles, the eruption would be survivable without any special, though agriculture would be negatively impacted.
0NancyLebovitz9ySounds like something best left for the future, which I hope will have much better tech. Tectonic engineering? Forcefields?
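For what it's worth, the rational case of the math problem above admits a simple circulant construction — a sketch of mine, not necessarily the one the poster had in mind. For $p = a/b$, take $b$ uniform weights for both sequences and a 0/1 circulant matrix with exactly $a$ ones in every row and column:

```python
from fractions import Fraction

def circulant_solution(a, b):
    """For p = a/b: uniform x and y over b points, and phi a circulant
    0/1 matrix with exactly `a` ones in each row and each column."""
    x = [Fraction(1, b)] * b
    y = [Fraction(1, b)] * b
    phi = [[1 if (n - m) % b < a else 0 for n in range(b)]
           for m in range(b)]
    return x, y, phi

a, b = 2, 5                      # p = 2/5, an arbitrary example
p = Fraction(a, b)
x, y, phi = circulant_solution(a, b)
assert sum(x) == 1 and sum(y) == 1                                          # (1), (2)
assert all(sum(x[m] * phi[m][n] for m in range(b)) == p for n in range(b))  # (3)
assert all(sum(y[n] * phi[m][n] for n in range(b)) == p for m in range(b))  # (4)
```

Exact `Fraction` arithmetic is used so the four conditions are checked without floating-point slack; the open part of the problem is irrational $p$, which this construction does not cover.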
Everyone uploaded?
1gwern9ySome existential risks simply may be intractable and the bullet bitten. It's not like we can do anything about [http://lesswrong.com/lw/cih/value_of_information_8_examples/6o6y] a vacuum collapse either.

I am doing a study on pick-up artistry. Currently I'm doing exploratory work to develop/find an abbreviated pick-up curriculum and operationalize pick-up success. I've been able to find some pretty good online resources*, but would appreciate any suggestions for further places to look. As this is undergraduate research I'm on a pretty nonexistent budget, so free stuff is greatly preferred. That said I can drop some out of pocket cash if necessary. If anyone with pick-up experience can talk to me, especially to give feedback on the completed materials, that would be great. *Seduction Chronicles and Attractology have been particularly useful

If you will need to convince a professor to someday give you a passing grade on this work I hope you are taking into account that most professors would consider what you are doing to be evil. Never, ever describe this kind of work on any type of graduate school application. Trust me, I know a lot about this kind of thing.
3Kaj_Sotala9yI'd be curious to hear more about the details of that episode.
7James_Miller9yI wrote up what happened for Forbes. [http://www.forbes.com/forbes/2004/0607/054.html]. I later found out that it was Smith's President not its Board of Trustees that finally decided to give me tenure.
3Kaj_Sotala9yHuh. I knew that academia had a liberal bias, but I didn't know it was quite that bad.
2beoShaffer9yThe professor who will be grading this has actively encouraged this research topic and multiple other professors at my school have expressed approval, with none expressing disapproval.
3James_Miller9yImpressive. What school?
0beoShaffer9yKnox College [http://www.knox.edu/]
1KPier9yYour article describes the consequences of being perceived as "right-wing" on American campuses.
Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think? I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?
7James_Miller9yRacism, sexism and homophobia are the three primary evils for politically correct professors. From what I've read of pick-up (i.e. Roissy's blog) it is in part predicated on a negative view of women's intelligence, standards and ethics, making it indeed sexist. See this [http://en.wikipedia.org/wiki/Lawrence_Summers#Differences_between_the_sexes] to get a feel for how feminists react to criticisms of women. Truth is not considered a defense for this kind of "sexism". (A professor suggested I should not be teaching at Smith College because during a panel discussion on free speech I said Summers was probably correct.) I've never discussed pick-up with another professor, but systematically manipulating women into having sex by convincing them that you are something that you are not (alpha) would be considered by many feminists, I suspect, a form of non-consensual sex.
2[anonymous]8yHow come they describe that in terms of 'convincing them that you are something you are not' rather than 'becoming something you didn't use to be'? Do they think people have an XML tag attached that reads 'beta' or something, independent of how they behave and interact? To me, the idea of convincingly faking being alpha makes as much sense as that of convincingly faking being fluent in English, and sounds like something a generalization of the anti-zombie principle would dismiss as utterly meaningless.
2Viliam_Bur8yThe given "something" is a package consisting of many parts. Some of them are easy to detect, some of them are difficult to detect.
In real life there seems to be a significant correlation between the former and the latter, so people detect the former to predict the whole package. After understanding this algorithm, other people learn the former parts, with the intention to give a false impression that they have the whole package. The whole topic is difficult to discuss, because most package-detectors have a taboo against speaking about the package (especially admitting that they want it), and most package-fakers do not want to admit they actually don't have the whole package. Thus we end up with very vague discussions about whether it is immoral to do ...some unspecified things... in order to create an impression of ...something unspecified... when ...something unspecified... is missing; usually rendered as "you should be yourself, because pretending otherwise is creepy". Which means: "I am scared of you hacking my decision heuristics, so I would like to punish you socially."
1[anonymous]8yWhat is it that is difficult to detect in a person and still people care about potential partners having it? Income? (But I don't get the impression that the typical PUA is poverty-stricken, and I can't think of reasons for people to care about that in very-short-term relationships, which AFAIK are those most PUAs are after.) Lack of STDs? (But, if anything, I'd expect that to anticorrelate with alpha behaviour.) Penis size? (But why would that correlate with behaviour at all?)
-2Viliam_Bur8yI guess it is how the person will behave in the future, and in exceptional situations. We can predict it based on a person's behavior here and now, unless that behavior is faked to confuse our algorithms. Humans are not automatically strategic and nature is not anthropomorphic, but if I tried to translate nature's concerns for "I want an alpha male", it would be: "I want to be sure my partner will be able to protect me and our children in case of conflict."
This strategy is calibrated for an ancient environment, so it sometimes fails, but often it works; some traits are also useful now, and even the less useful traits still make an impression on other people, so they give a social bonus. (For example, taller people earn more on average, even if their height is not necessary for their work.) Of course there is a difference between what our genes "want" and what we want. I guess a typical human female does not rationally evaluate a male's capacity for protecting her in combat, but it's more like unconscious evaluation + halo effect. A male unconsciously evaluated as an alpha male seems superior in almost everything; he will seem at the same time stronger, wiser, more witty, nicer, more skilled, spiritually superior, whatever. Conflicting information will be filtered away. ("He beat the shit out of those people, because they looked at him the wrong way, and he felt the need to protect me. He loves me so much! No, he is not aggressive; he may give that impression, but only because you don't really know him. In fact he is very gentle, he has a good heart and wouldn't wish no harm to anyone. He is just a bit different, because he is such a strong personality. Don't judge him, because you don't know him as much as I do! And he did not really murder that guy in 2004, he was just framed by the corrupt police; he explained me everything, because he trusts me. And by the way the dead guy deserved it.") Anyway, preference is a preference, you cannot explain it away. (Analogica
0[anonymous]8yWhat I meant to ask was what kind of information about someone is hard to detect in a few hours of face-to-face interaction but would still affect someone else's willingness to have a (usually very-short-term) sexual relationship with them, regardless of whatever evolutionary reasons caused such a preference to exist.
(So, in your example, the equivalent of “men like women with big breasts” would be a valid answer, but the equivalent of “men like women who could produce lots of milk for their children” wouldn't.) And I didn't mean that as a rhetorical question. (FWIW, and I think I've read this argument before, it would make evolutionary sense for women to have different preferences for one-night stands than for marriage, because if you had to choose between a healthy man and a wealthy one you'd rather your child was raised by the latter but (unbeknownst to him) had half the genes of the former.)
0Viliam_Bur8yI think humans have a general preference for "real values" as opposed to faking their reward signals (a.k.a. "wireheading"). Of course sometimes we fake the reward signals, because it is pleasant and we are programmed to seek pleasure; but if we did it without restraints, our survival value would go down. So when someone enjoys "fake values" too much, they will get negative social feedback, because by putting themselves in danger they also decrease their value as an ally. So a part of the mechanism that warns women against "fake alpha males" may be a general negative response against "fake values", not necessarily related to specific real risks of having one-night sex on birth control with a fake alpha versus with a real alpha. Another part could be this: it is good for a woman to have sex with a man whom other women consider attractive. (If the man is unattractive to other women, perhaps he has some negative trait that you did not notice, so it is better to avoid him anyway, because you don't want to risk your child inheriting a negative trait.) On the level of feelings -- not being a woman I can only guess here -- the information, or just a suspicion, that a man is unattractive to other women, probably makes the man less attractive. (It is a perception bias.) Simply said: "women like men liked by other women"; and they honestly like them, not just pretend that they do.
The idea of a "fake alpha male" (a PUA) probably evokes an image of a man who was unattractive to women he met yesterday, and who is unattractive even today in moments where he stops playing by the PUA rules and becomes his old self. Therefore he is an unattractive man, who just uses some kind of reality-distortion technique to appear attractive. -- An analogy would be an ugly woman using hypnosis to convince men that she is a supermodel. The near-mode belief in the existence of such women would make many men feel very uncomfortable, and they would consider "speed hypnosis" lessons unethical. (For better ana
2[anonymous]8yI mean, what's the difference between a fake alpha male and someone who didn't use to be an alpha male but has since become one? Is someone who didn't grow up speaking English but now does a “fake English speaker”? Don't lots of men drink alcohol in order for women to look more attractive to them? :-)
1Viliam_Bur8yCongruency. If someone became an alpha male by PUA training, they will probably have the "visible" traits of an alpha male, but lack the "invisible" traits (where "invisible" is short for "not easy to detect during the first date"), because the training will focus on the "visible" traits. Unless it is a PUA training [http://www.blueprintdecoded.com/] that explicitly focuses on teaching the "invisible" traits (because they believe that this is the best way to learn and maintain the "visible" traits in the long term). At this point people usually begin to discuss the definition of the term "PUA". People who like PUA will insist that such trainings belong under the PUA label, and perhaps they are the ultimate PUA trainings, the results of decades of field research.
People who dislike the PUA will insist that the label "PUA" should be used only for those surface trainings that create "fake alpha males", which is a bad thing, and that any complex personality improvement program is just old-fashioned "manning up", which is a good thing, and should not be confused with the bad thing. This battle for the correct definition is simply the battle for attaching the bad or good label to the whole concept of PUA.
Good story. Someone who didn't use to be an alpha male but has become one often has a good story that explains why it happened. A story "first I was pathetic, but then I paid a lot of money to people who taught me to be less pathetic, so I could get laid" is not a good story. A good story involves your favorite dog dying, and you being lost in a jungle after the helicopter crash, feeding for weeks on scorpions and venomous snakes. Or spending a few years in the army. If a miracle transformed you into an alpha male, your past is forgiven, because there is a clean line between the old you and the new you. Also if the shock was enough to wake you up, then you probably had a good potential, you just didn't use it fully; you had to be pushed forward, but you found the way instinctively.
2[anonymous]8yAt least for short-term relationships, people don't actually want good genes; they want things which correlated with good genes in the ancestral environment. (Not all men would be outraged by the possibility that a woman has undergone breast enlargement surgery, for example.)
1[anonymous]8y(Why was that downvoted? It didn't explicitly answer my question, but it also contains lots of interesting points. Upvoted back to zero) My question was what those invisible traits are.
Well, I guess if someone just didn't have "a good potential" it'd be hardly possible for them to learn PUA stuff anyway, much like it'd be hardly possible for someone with an IQ of 75 to learn computational quantum chromodynamics (or even convincingly fake a knowledge thereof). I'm not terribly familiar with PUAs, but I was under the impression that most of their disciples are healthy, non-poor people who for some reason just didn't have a chance to learn alpha behaviour before (say, they weren't as interested in relationships as they are now, or they've just broken up from a ten-year-long relationship they had started when they were 14, or something).
2Viliam_Bur8y
* staying alpha when the situation becomes more intense. (A fake alpha may behave like a real alpha while in the bar, but lose his coolness when alone with the girl in her room. Or may behave like a real alpha the first night, but lose his coolness if they fall in love and a long-term relationship develops.)
* heroic reaction in case of a real threat. (A fake alpha is only trained to overcome and perhaps overcompensate for shyness in social situations where real danger is improbable.)
* other kinds of consistency. (A fake alpha may forget some parts of alpha behavior when he is outside of the bar, in a situation his PUA teachers did not provide him a script for. For example he does not fear to say "Hello" to a nice unknown girl, but still fears to ask his boss for a higher salary.)
Rationally, this should not be a problem for a one-night stand, if the probability of a real threat or falling in love is small. However, thinking that someone might have this kind of problem can reduce his attractiveness anyway.
1[anonymous]8yThanks.
0James_Miller8yYes, in our DNA.
1wedrifid8yOur DNA can give different degrees of bias towards different competitive strategies. It doesn't determine status or behavior in a given situation.
0Kaj_Sotala8yI think that was the point.
1wedrifid8yAs a reply to the quoted context, the point James made was false.
0[anonymous]8yI'd guess that, except as far as physical attractiveness is concerned¹, socialization is much much much more relevant than DNA. A clone of Casanova raised in a poor, devoutly religious family in an Islamic country wouldn't become terribly good at picking up girls. ¹And even there, grooming does a lot.
[-][anonymous]8y 4
People here generally think reading a big part of the sequences is important for participating in debate. I see large influence on the thinking of people on LessWrong from non-sequence and indeed non-LW writing such as Paul Graham's writing on Keeping your identity small or What you can't say. Why don't we include these in the promotion of material aspiring rationalists should ideally read? Now consider building such a list. Don't include entire books. While a required reading list might complement the sequences nicely especially when Eliezer finally gets around ... (read more)
3shminux8yI'm pretty sure that one could extract a full sequence from Patrick McKenzie's blog [http://www.kalzumeus.com/greatest-hits/].
1albeola8yhttp://knowyourmeme.com/photos/88818-philosoraptor [http://knowyourmeme.com/photos/88818-philosoraptor]
Could someone involved with TDT justify the expectation of "timeless trade" among post-singularity superintelligences? Why can't they just care about their individual future light-cones and ignore everything else?
6wedrifid9yPeople (with the exception of Will) have tended not to be forthcoming with public declarations that the extreme kinds of "timeless trade" that I assume you are referring to are likely to occur. (There are a few reasons of various levels of credibility, but allow me to speak to the most basic application.) If an agent really doesn't care about everything else then they can do that.
Note that just caring about their individual future light-cones and ignoring everything else means:
* You would prefer to have one extra dollar than to have an entire galaxy just on the other side of your future light cone transformed from being tiled with tortured humans to being a paradise.
* If the above galaxy were one galaxy closer - just this side of your future light cone - then you will care about it fully.
* Your preferences are not stable. They are constantly changing. At time t you assign x utility to a state of (galaxy j at time t+10). At time t+1 you assign exactly 0 utility to the same state of (galaxy j at time t+10).
* Such an agent would be unstable and would self modify to be an agent that constantly cares about the same thing. That is, the future light cone of the agent at time of self modification.
Those aren't presented as insurmountable problems, just as implications. It is not out of the question that some people really do have preferences that literally care zero about stuff across some arbitrary threshold. It's even more likely for many people to have preferences that care only a very limited amount for stuff across some arbitrary threshold. Superintelligences trying to maximize those preferences would engage in no, or little acausal trade with drastically physically distant superintelligences. Trade - including acausal trade - occurs when both parties have something the other guy wants.
0Mitchell_Porter9ySo it seems that selfish agents only engage in causal trade but that altruists might also engage in acausal trade.
3wedrifid9yWe can't quite say that. It is certainly much simpler to imagine scenarios where acausal trade between physically distant agents occurs if those agents happen to care about things aside from their own immediate physical form. But off the top of my head "acausal teleportation" springs to mind as something that would result in potential acausal trading opportunities.
Then there are things like "acausal life insurance and risk mitigation" which also give selfish agents potential benefits through trade.
[-][anonymous]8y 3
A user whose judgement I deeply admire has told me off site that my posts are harmful to the community and it is better that I stop posting. I will respect his opinion and discontinue posting until further notice. Please down vote this post if I make responses after it. Thanks for all the fun and cool conversation! It was a great ride while it lasted, I will try to live up to the spirit of LW in the future. First Checkpoint I delayed the break from LW because of some of the feedback to this post as well as plain force of habit. I did some posts I considered... (read more)
until further notice.
You should note this on a calendar or something: two months from now you should re-evaluate your position. It seems to me like there's a chance you'll change to the point you're net positive; re-evaluation is cheap; that small chance should be allowed for, not discarded.
[-][anonymous]8y 10
This sounds like a good idea. I will do so.
I'm sorry to see you go. I do agree with gwern that your recent critical lamentations have been a negative contribution. Particularly because I find it is too easy to be influenced towards cynicism. However, your recent dissatisfaction aside, your contributions in general are fine, making you a valuable community member. I never see the name "Konkvistador" and think "Oh damn, that moron is commenting again", which puts you ahead of rather a lot of people and almost constitutes high praise! I can perhaps empathise with becoming disgruntled with intellectual standards on lesswrong. People are stupid and the world is mad - including most people here and everywhere else I have interacted with humans. I recently took a whole 30 days off, getting my score down to '0', weakening the addiction and also relieving a lot of frustration. I enjoy lesswrong much more after doing that.
Hopefully you decide to return some time in the future as well.
4TimS8yHonestly, I think you were too easily mollified by lukeprog - for the reasons I said to him there.
8wedrifid8yI tend to agree with Shokwave's reply. Lesswrong users not learning a bunch of history is not a big deal. The subject is fairly boring. Someone else can learn it. Lesswrong isn't supposed to be a site where all users must learn arbitrary amounts of information about arbitrary subjects. Most people have better things to do.
Jeez. You've been the top contributor in the past 30 days. This departure of yours is the most harmful thing you've ever done to the community. I wish you'd stay.
This is bloody stupid. Please don't go. If someone from my cluster of ideaspace told you that you detracted from the community - they are wrong.
9drethelin8yI find your style of commenting both fun to read and interesting. I think your posts are valuable even if they're more "thinking out loud" than "I have studied ALL THE LITERATURE". As a community I think we can and SHOULD be able to talk about things in ways that don't involve 50 citations at the bottom of the page, even though I think those posts are valuable. I don't know who you're scaring away with your amount of commenting, but I don't miss them.
5thomblake8ySlowing down is called-for, stopping is not. You're a valued member of the community.
5Vladimir_Nesov8yIt's somewhat plausible that 20 comments a day may be too much (in someone's perception), or that it's better to develop certain kinds of posts more carefully, maybe even to avoid certain topics (that would shift the focus of conversations on LW in an undesirable direction), but it's not a case for not posting at all.
(That is, the questions of whether Konkvistador's posts are slightly harmful for the community (in what specific way) and whether the best intervention in response to that hypothetical is to discontinue posting entirely don't seem to me clearly resolved, and a low rate of posting seems like a better alternative for the time being, absent other considerations.)
whether Konkvistador's posts are slightly harmful for the community
It is ridiculous to argue that an eloquent and prolific poster who actually seems to have read the motherfucking sequences and doesn't get tired of trying to help new people access them (a rare trait these days) is causing harm. Even if that was so for every single thing he wrote; and note that when Lukeprog cites against his argument that productivity and openness to outside ideas on LW is lower than it should be, the bundle includes many of Konkvistador's posts as examples of openness and productivity! Imagine that! At the very least his excellent taste in outside links that he regularly shares with the community makes him definitely a signal not a noise man. But please let's pile on him. I bet soon someone will bring up how he "violated the mindkilling taboo" or even accuse him of getting "mindkilled".
6Vladimir_Nesov8yYour rhetoric is for dismissing the question as ridiculous. I suggest actually considering the question, and expect that the answer accepted by Konkvistador is wrong on both levels (their contributions don't seem harmful on net, there are multiple meanings of "harmful" that should be addressed separately with different interventions, and stopping participation entirely doesn't seem to be the best response to the hypothetical of their contributions being harmful in some of these senses). (For example, it's likely that for most posters, there is some aspect of their participation that is harmful, and the appropriate response is to figure out what that aspect is and fix it. So it's useful to consider these questions.)
1CharlieSheen8yMy rhetoric is what it is, I'm pissed. Feel free to make an argument for why Konkvistador's output is on net "harmful", I will try to consider it properly. Though naturally we are left at a disadvantage here, since we will likely only ever hear one side of the story. The man himself has probably already scrambled his password or something and won't be putting up a defence.

My rhetoric is what it is, I'm pissed. Feel free to make an argument for why Konkvistador's output is on net "harmful", I will try to consider it properly.

This is not my argument, please re-read the discussion when you calm down.
5CharlieSheen8yI generally love his 20 comments a day.
4[anonymous]8yOk this didn't work. Please down vote parent and this comment to punish me for posting. Edit: Ok who up voted this. Not funny. :/
4wedrifid8yI don't downvote on request (for reasons I have occasionally expressed I consider that self-control strategy to be poor). Go edit C:\Windows\System32\drivers\etc\hosts or /etc/hosts to point lesswrong to 127.0.0.1. Works for me. In fact, whenever I get the slightest impulse to go look at lesswrong I deliberately and actively type in lesswrong.com, anticipating somewhat eagerly the Server Not Found message and giving myself a mental reward. This was amazingly effective in achieving extinction [http://en.wikipedia.org/wiki/Extinction_(psychology)] in an excessively reinforced behavioral pattern.
0[anonymous]8yThis is useful advice. Thank you.
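For anyone wanting to try wedrifid's hosts-file trick, a minimal sketch. It is demonstrated against a scratch file here so nothing on the system is touched; point it at /etc/hosts as root (or C:\Windows\System32\drivers\etc\hosts on Windows, as Administrator) to apply it for real.

```shell
# Scratch copy for demonstration; substitute the real hosts file path to apply.
HOSTS=./hosts.demo
printf '127.0.0.1 lesswrong.com\n127.0.0.1 www.lesswrong.com\n' >> "$HOSTS"
grep lesswrong "$HOSTS"
```

After applying this to the real hosts file, requests to lesswrong.com resolve to the local machine and fail, producing the Server Not Found page described above.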
3TimS8yKonkvistador, For some of us, your behavior looks like evaporative cooling [http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/] from the inside. For those of us who don't want cooling, this is not a good thing. But I respect you too much to upvote something you don't want upvoted.
0[anonymous]8yNot funny. :/
3[anonymous]8yI'm not sure why Gwern's and Nesov's replies are being downvoted to the point that they are hidden. Surely there is disagreement, but I see the quality of their posts as high. I urge voters to vote on the quality of the posts, not whether you agree/disagree with them.
4CharlieSheen8yBut even if I'm wrong in this case, it seems obvious we have a split community on this. I'm betting the subconscious parts of the brains of the "Top Poster" Clique are running their little hamster wheels trying to find clever reasons to associate with the high-status gwern rather than the low-status absent underdog.
2TimS8yTwo points: * I don't know what's going on inside your head, but this looks like motivated cognition from the outside. * Regardless of why you are saying this, it doesn't help change the community norm in the direction that you seem to want.
-2CharlieSheen8yI am on a drug. It’s called Charlie Sheen. It’s not available because if you try it you will die. Your face will melt off and your children will weep over your exploded body. I'm on a quest. [http://www.youtube.com/watch?v=9QS0q3mGPGg] I'm not sure there is any hope for this community. But ok you seem reasonable, I'll quit and let the conformist contrarian wolves tear K's corpse.
1TimS8yQuitting doesn't advance your goals either. If your goal isn't posturing for your own emotional satisfaction, stop doing posturing and do some real work. Do the impossible [http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/] and try to actually convince the community that gwern's advice was bad.
-3CharlieSheen8yThat isn't a job for CharlieSheen. Though I think if you look past the rude language you will find the arguments sound. But yes I was in the wrong here on tone.
-2CharlieSheen8yGwern is wrong. It's that simple. edit Corrected typo. 2nd edit Haters gonna hate. I'd love to hear some actual arguments though clique men. So predictable on LW someone bitches about karma and you insta up vote him and downvote the opponent. In a prisoner's dilemma with the options of defect or cooperate, LWers always pick CONTRARIAN.
3TimS8yGwern is being flippant, but what's wrong with Nesov's statements?
4CharlieSheen8yYeah I guess I can agree. I originally misread him. He apparently doesn't think K's been on net "hurting" the community. I've edited my posts to reflect this. So apologies to Nesov.
1lsparrish8yIn my opinion this kind of undefensive humility is something to be celebrated. Good job, Konk!
[-][anonymous]8y 3

Meta

Guys I'd like your opinion on something.

Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary, arguments, or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson's and Paul Graham's stuff. But I don't think Robin Hanson really counts, as Overcoming Bias used to be LessWrong.

But for the most part not only has the LessWrong community not updated on ideas and concepts that haven't grown here. The only major examples fellow LWers brought up ... (read more)

[This comment is no longer endorsed by its author]

It occurred to me that on this forum QM/MWI discussions are a mind-killer, for the same reasons as religion and politics are:

As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?

What's different about religion is that people don't feel they need to have any particular expertise to have opinions about it. All they need is strongly held beliefs, and anyone can have those. No thread about Javascript wi

... (read more)
3wedrifid8yNot particularly. To the extent that it is a mind killer it is a mind killer in the way discussions of FAI, SIAI capabilities, cryonics, Bayesianism or theories like this [http://lesswrong.com/lw/bxr/muehlhauserwang_dialogue/] are. Whenever any keyword suitably similar to one of these subjects appears one of the same group of people can be expected to leap in and launch into an attack of lesswrong, its members, Eliezer, SingInst or all of the above - they may even try to include something on the subject matter as well. The thing is most people here aren't particularly interested in talking about those subjects - at least they aren't interested in rehashing the same old tired arguments and posturing yet again. They have moved on to more interesting topics. This leads to the same abysmal quality of discussion - and belligerent and antisocial interactions - every time.
-1vi21maobk9vp8yAny FAI discussions are mindkilling unless they are explicitly conditional on "assuming FOOM is logically possible". After all, we don't have enough evidence to bridge the difference in priors, and neither side (AI is a risk/AI is not a risk) explicitly acknowledges that fact (and this problem makes them sides more than partners).
1Nornagest8yI'm not sure I agree with Graham on the exact mechanics there. There are a number of mindkilling topics where empirically supportable answers should in principle exist: the health effects of obesity, for example. Effects of illegal drugs. Expected outcomes of certain childrearing practices. Expertise exists on all these topics, and you can prove people wrong pretty conclusively with the right data set, but people -- at least within certain subcultures, and often in general -- usually feel free to ignore the data and expertise and expound their own theories. This is clearly not because these questions lack definite answers. I think it's more because social approval rides on the answers, and because of the importance of the social proof heuristic and its relatives. QM interpretation may or may not fall into that category around here.
2taelor8yGraham actually agrees with you; the essay quoted above continues:

Very difficult words to spell, arranged for maximum errors-- the discussion includes descriptions of flash recognition of errors.

A disproportionate number of people involved with AI risk mitigation and the Singularity Institute have graduated from "Elite Universities" such as Princeton, Harvard, Yale, Berkeley, and so on and so forth. How important are Elite Universities, aside from signalling status and intelligence? How important is signalling status by going to an elite University? Are they worth the investment?

0D_Malik8yA while back John_Maxwell linked me to this [http://www.overcomingbias.com/2009/03/college-prestige-matters.html] OB post and this [http://www.halfsigma.com/2010/07/surprising-news-harvard-best-school-to-go-to.html] post on halfsigma.

Disturbed to see two people I know linking to Dale Carrico on Twitter. Is there a standalone article somewhere that tries to explain the perils of trying to use literary criticism to predict the future? [EDIT: fixed name, thanks for the private message!]

1Risto_Saarelma9yI found Charlie Stross' recent blog post [http://www.antipope.org/charlie/blog-static/2012/05/deconstructing-our-future.html] endorsing Carrico strange. Carrico has his baggage, but why is Stross suddenly so intent on painting transhumanism as the evil mutant enemy?
1JoshuaZ9yI think what we're seeing there is pretty easy - Stross says so himself - he's annoyed at the current economic situation which punctured the ridiculously optimistic sort of thing he had in Accelerando. This is to some extent getting wrapped up in his short term political concerns.

Theories of Big Universes or Multiverses abound-- Inflation, Many Worlds, mathematical universes etc. Given a certain plausible, naturalistic account of personal identity (that for you to exist merely requires there to be something psychologically continuous with earlier stages of your existence) if any of these theories is true we are immortal (though not necessarily in the pleasant sense).

Questions: Is the argument valid? What are the chances that none of the multiverse theories are true? What, if anything, can we say about the likely character of this a... (read more)

1wedrifid8yOnly with a usage of "immortal" that abandons the cached thinking and preferences [http://lesswrong.com/lw/6op/preference_for_many_future_worlds/] that we usually associate with the term.
1Nisan9yI think if you fully taboo the concepts of "personal identity" and "existence", the argument evaporates. Before tabooing, your argument looks like this: 1. Alternate universes, containing persons psychologically continuous with me, exist. 2. Persons psychologically continuous with me are me. 3. Therefore I am immortal. 4. Therefore I should anticipate never dying. 5. Therefore I should make plans that rely on never dying. On the face of it, it seems sound. But after tabooing, we're left with something like: 1. Our best, most parsimonious models of reality refer to unseen alternate universes and unseen persons psychologically continuous with each other. 2. In such models, alternate futures of a person-moment have no principled way of distinguishing one of themselves as "real". 3. Therefore, our models of reality refer to arbitrarily long-lived versions of each person. ... 4. Therefore I have reason to anticipate never dying. 5. Therefore I have reason to act as if I will never die. Edit: My #4 and #5 were nonsense.
1Jack8yI either don't understand your re-write or don't understand how it dissolves the argument.
2Nisan8yOh, I should have been more explicit: I think there's a big logical leap between steps 3 and 4 of the rewritten argument, as indicated by the ellipsis. (Why is our models of reality referring to arbitrarily long-lived versions of each person a reason to act as if I will never die?) It's far from clear that this gap can be bridged. That's why I said the argument evaporates.

I'm thinking of writing a series of essays regarding applied rationality in terms of politics and utilitarianism, and the ways we can apply instrumental rationality to better help fight the mind-killingness of political arguments, but I'd like to make sure that lesswrong is open to this kind of thing. Is there any interest in this kind of thing? Is this against the no-politics rule?

4[anonymous]9yIn other words, you want to write essays on how people can have political conversations without being mindkilled? I don't think it would violate the "no politics" rule. Just please keep it sufficiently meta to avoid mindkilling people reading the essay. So if you give an example of a non-mindkilling political conversation, don't talk about a current day issue. Instead, talk about 17th century France and its use of mercantilism. How might a king and his advisers come to the best policy decision regarding whether or not to allow free trade with England? If they so choose, people can apply those principles to actual political discussions somewhere other than LW. That said, I don't think I'd read the essay. I try to avoid politics, meta or otherwise. It does sound interesting, though. Best of luck with it. (I hope it's obvious, but just in case... This is just my view, not necessarily an expression of LW policy or consensus.)
1NancyLebovitz9yI'm definitely interested. It's probably worth distinguishing between avoiding getting mind-killed oneself and trying to get other people out of mind-killed mode.

What does it mean for a hypothesis to "have no moving parts"? Is that a technical thing or just a saying?

2dbaupp9yNot really either, it's a neat way of saying that a hypothesis doesn't actually explain anything: it doesn't provide a deeper explanation for the phenomenon in question (this explanation is the "moving parts"). A hypothesis allows you to make predictions; a good one will clearly express how and why various factors are combined to make the prediction, while a bad one will at best give the "how" without providing any deeper understanding. So a bad hypothesis is a little like a black box [https://en.wikipedia.org/wiki/Black_box] where the internal mechanism is hidden (sometimes, "no moving parts" might be better expressed as "unknown moving parts"). -------------------------------------------------------------------------------- This idea occurs in the sequences, but the best explanation of the meaning I can find there is (source [http://lesswrong.com/lw/qe/do_scientists_already_know_this_stuff/]): (The "secret sauce" refers to the deeper explanation.)
1billswift9yInteresting, I hadn't encountered that in any of my studying, just seen it in passing, but with my mechanical and other technical experience (limited though it is) I automatically interpreted "no moving parts" as a good thing. Another case where sloppy writers should have thought things through a little further. ADDED: Anyone with an engineering background would have thought the same, my experience is limited but every engineering design book stresses reducing or eliminating moving parts as a good thing. For anyone interested, Ferguson's Engineering and the Mind's Eye is a wonderful, comprehensive look at engineering design for general audiences.
3Oscar_Cunningham9yI think the analogy holds. Hypotheses with too many "moving parts" can predict anything and so tell you nothing (they overfit the data). Hypotheses with too few moving parts aren't really hypotheses at all, just passwords like "phlogiston" that fail to explain anything (they underfit the data). Analogously a mechanism with too many parts takes a lot of effort to get right, and its weaknesses are hidden by its complexity. But if someone tried to sell you a car with no moving parts, you might be suspicious that it didn't work at all. As Einstein said, things should be as simple as possible, but no simpler.
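The overfitting half of that analogy can be made concrete with a toy numerical sketch (a hypothetical example, not from the thread): a polynomial with as many free parameters as data points reproduces any observations exactly, which is precisely why it tells you nothing.

```python
import numpy as np

x = np.arange(5, dtype=float)
# Three very different "data sets"; a 5-parameter model fits each one exactly.
for y in ([0, 0, 0, 0, 0], [3, 1, 4, 1, 5], [1, -1, 1, -1, 1]):
    coeffs = np.polyfit(x, y, 4)   # degree 4 = 5 moving parts, 5 data points
    fitted = np.polyval(coeffs, x)
    print(np.allclose(fitted, y))  # True for every data set
```

A model that fits whatever you feed it has made no prediction; the corresponding "hypothesis" excludes nothing.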

Liron's post about the Atkins Diet got me thinking. I'd often heard that the vast majority of people who try to lose weight end up regaining most of it after 5 years, making permanent weight loss an extremely unlikely thing to succeed at. I checked out a few papers on the subject, but I'm not good at reading studies, so it would be great to get some help if any of you are interested. Here are the links (to pdfs) with a few notes. Anyone want to tell me if these papers really show what they say they do? Or at any rate, what do you think about the feasibilit... (read more)

How anthropomorphizing penguins held up the research by some 100 years: link.

(Of course, the premise of the comic is incompatible with the Singularity, since human-level AIs are widespread as companions, without ever going FOOM.)

Anyone else try the Bulletproof diet? Michael Vassar seems to have a high opinion of Dave Asprey, the diet's creator.

4Micaiah_Chang9yI have. I had something like 50-30% adherence to it during June-September, adherence something like 4 days out of a week from October until the start of December, one month off during December because of family obligations and then mostly bulletproof from the new year onwards, with me ordering the coffee and everything. I would say that I go "off" the diet about one meal per two weeks, but as much as possible "slightly" (for example, a single meal with enough rice in it to take me out of ketosis). Since I've started the diet, I've also made other adjustments, such as striving for at least seven hours of sleep per night, scheduling my time better to reduce stress and not going hungry because I'm too unmotivated to make food. Keep these in mind on top of all the other 'wacky' self experiment biases. (Alternatively I CAN PRIME YOU TO ASSOCIATE ME WITH LOW STATUS BY TYPING IN ALL CAPS!!!1) 1. Last year, I weighed about 177 +/-3 pounds. Now I weigh about 145 +/- 2 lb (Edit: actually 135 at the time of the post. Reweighed myself and that was the new weight.). I reached 145lb during late September while not fully on the diet and maintained it, except for an unknown period in late December - mid January where I went up to about 152lb, I think that implies something back home caused me to gain weight, although whether it's food or vacation loafing is hard to say. 2. I used to have consistent problems staying awake in morning classes, and large amounts of brain fog after meals. These problems have largely vanished, except when I take a meal off the diet or have had insufficient sleep for other reasons such as (see 3). 3. I, unlike wmorgan, have been food poisoned. I think I narrowed the likely causes down to either eating factory eggs from chickens without antibiotics used on them or improperly preparing chicken livers. Since then, I have replaced the factory eggs with pastured ones with cooked whites (which also happen to tas
4Jayson_Virissimo9yI drink a concoction loosely based on Bulletproof Coffee [http://www.bulletproofexec.com/how-to-make-your-coffee-bulletproof-and-your-morning-too/] almost every day and have been following a mostly Paleo Diet for over half a year now. I don't want to oversell it, but these kinds of dietary changes have been extremely valuable to me. I originally planned to run a series of about 3-5 diet experiments, but I had so much improvement on the first one I tried, I just stuck with it and made it a life-style change (although, I still make minor tweaks here and there). I made much of my data publicly available here [http://jayquantified.blogspot.com/]. Let me know if you have any questions.
1drethelin9yTell me more about your concoction. At the moment I drink one of these [http://www.gnc.com/product/index.jsp?productId=2601264] every morning as the best balance I've found between being palatable and having high protein and low carbs but am thinking about changing. I'm lactose intolerant so I don't know if I can follow the butter advice but MCT might have potential.
0Jayson_Virissimo9yI add and subtract new ingredients every few weeks, but this morning I had some Tierra Del Sol medium roast coffee, low-carb sugar-free vanilla protein powder, creatine powder, chia seeds, MCT oil, coconut milk, organic grass-fed butter, with a pinch of stevia.
0James_Miller9yI'm also lactose intolerant but I have no trouble consuming large quantities of grass fed butter, MCT oil and coconut oil.
2wmorgan9yI've been trying to adhere to it for a year or so. My main point of departure is that I drink a lot of diet soda and beer. My results: 1. I lost five pounds in the first two months and the weight didn't come back, despite consuming slightly more calories, and a lot more calories from fat. 2. It's easy for me not to graze on simple carbohydrates, because I feel fuller. Regardless of your nutritional philosophy, most of us agree that potato chips and cookies do nothing for you. 3. I haven't gotten or given anyone food poisoning or any other indication that my food is too undercooked. Especially for beef and lamb, I strongly suspect that I could eat it raw and be OK. Similarly, but not so much, for eggs, fish, and pork. I still cook the pink out of chicken, but I'm eating much less chicken anyway.
1drethelin9yI can say that grass-fed roast beef is hugely tastier than regular grocery store sandwich meat, and is a big boon to my low carb diet. I started out doing the 4 hour body diet in december and lost 15 or so pounds fairly fast, and then started to stagnate. I ended up cutting beans also and am now going back down again. I don't strictly follow the bulletproof diet but mine may be a data point for similar diets.
[-][anonymous]9y 2

Anything non-obvious in job searching? I'm using my university's job listings and monster.com, but I welcome any and all advice as this is very new to me. While I won't ask, "What is the rational way of looking for jobs?" I will ask, "How can I look for jobs more effectively than with just online job postings?"

[This comment is no longer endorsed by its author]

Anything non-obvious in job searching?

This depends on what you consider obvious. (Many things that seem obvious to me now would have been great advice 10 or 15 years ago; sometimes even 1 year ago.) Also there is a difference between knowing something and feeling it; or less mysteriously: between being vaguely aware that something "could help" and having an experience that something trivial and easy to miss did cause a 50% improvement in results. So at the risk of saying obvious things:

Don't be needy. Search for a job before you have to; that is, before you run out of money. Some employers will take a lot of time; first interview, a week or two waiting, second interview, another week or two, third interview... be sure you have enough time to play this game. If a month later you get an offer that is not what you wanted, be sure to have the freedom to say "no".

Speak with more companies. If you get two great offers, you can always take one and refuse the other. If you get two bad offers (or one bad offer and one rejection), your only choices are to take a bad offer, or start the whole process again, losing a month of your time. How many companies is enough? You probably ... (read more)

6wmorgan9yI notice you have a STEM degree. Since the job market is in your favor, I'll assume you will find multiple employers interested in hiring you. Learn about salary negotiation now, before you go into an interview. If you're as clueless as I was when I got my first job, then you can pick up thousands of dollars for a few hours of research. Recommended reading: http://www.kalzumeus.com/2012/01/23/salary-negotiation/ [http://www.kalzumeus.com/2012/01/23/salary-negotiation/]
5Alicorn9yIf you intend to donate income acquired via job to reducing x-risk, there is a network thing [http://www.xrisknetwork.com/].
4[anonymous]9yI need to step away and think about that for a while before I decide whether or not it's a good thing and whether it would work for me. I'm pattern-matching heavily to phygishness and tithing, but flinching isn't fair without really examining the issue. Besides ideological differences (if any): is charitable donation to x-risk organizations tax deductible? Also, there is little evidence in my post history that I would donate a significant portion of income to reducing x-risk, and any attempt on my part to establish that now could just be self-interest and so should not be evidence. Thanks for linking this, though, I'm astonished that it exists because it means that people here are taking an abstract idea very seriously, which is rare and dangerous [http://lesswrong.com/lw/2yp/making_your_explicit_reasoning_trustworthy/] (see the section "Explicit reasoning is often nuts").
4James_Miller9yNetwork. Write down the names and numbers of all the adults you know in a notebook. Call each and ask if they know of anyone who might be helpful to you in a job search. Iterate until employed.
3Kaj_Sotala9ySee also the Job Search Advice thread [http://lesswrong.com/lw/626/job_search_advice/] for some suggestions.
2jsalvatier9yI've heard good things about the book What color is your parachute [http://www.amazon.com/What-Color-Your-Parachute-2011/dp/158008267X/ref=sr_1_2?ie=UTF8&qid=1338524043&sr=8-2] . The section on negotiating and asking for a raise seemed pretty useful.

I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a hilariously OP lvl ~30 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice. (ETA: I'll probably re-post this next Open Thread, I hope people don't mind too much.)

0[anonymous]8yMy Playstation Network name is Thomas_Bayes. One of my goals for the year was to learn Chess, but I've been very busy recently and haven't allocated much time to it.

For someone who hopes for lots of medical/bionic wonders going on the market within the next 2-3 decades, how stupid/costly it really is to start smoking a little bit today? I'm only asking because I tried it for the 1st time this week, and right now I'm sitting here smoking Dunhills, browsing stuff and listening to Alice in Chains, having a great night.

I insist on doing some light drug as I have an addictive personality that longs for a pleasant daily routine anyway - and I quit codeine this winter before it was made prescription-only (and not a moment t... (read more)

Reddit "ask me anything" coming tomorrow from the Stanford Prison Experiment guy.

Does a Tegmark Level IV type Big World completely break algorithmic probability? Is there any sort of probability that's equipped to deal with including a Big World as a possibility in your model?

-1wedrifid8yNo. Why would it?
0khafra8yCaveat: I can not math good. But, if "all mathematical structures are real," and possible universes, that must include structures like the diophantine equations isomorphic to Chaitin's Omega, and other noncomputable stuff, right? Can algorithmic probability tell me what mathematical structure generated a string, when some of the possible mathematical structures are not computable?
1[anonymous]8yPresumably you'll only ever attempt to infer from a finite prefix of such a string, which is guaranteed to have a computable description. (Worst case scenario: "the string whose first character is L, whose second character is e, ...").
1khafra8yBut, we're trying to infer the actual generator, right? If, big-worldily, the actual generator is in some set of incomputable generators, it doesn't help us at all to come up with an n-parameter generating program for the first n bits -- although no computable predictor can do better. If the set of possible generators is, itself, incomputable, how do we set a probability distribution over it?
0[anonymous]8yAh, good point. Yes, that's probably a safe assumption for practical purposes.
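The point about finite prefixes can be illustrated with a toy sketch (a hypothetical example; the four hand-picked generators and their bit-lengths stand in for the real enumeration over all programs): inference conditions only on the observed prefix, with each candidate generator weighted by 2^-length as in the Solomonoff prior.

```python
from fractions import Fraction

# Hypothetical stand-ins for "programs": (length in bits, index -> bit).
PROGRAMS = [
    (2, lambda i: 0),                 # all zeros
    (2, lambda i: 1),                 # all ones
    (3, lambda i: i % 2),             # alternating 0101...
    (5, lambda i: 1 if i < 2 else 0), # 11 then zeros
]

def posterior(prefix):
    """Renormalized 2^-length weights of programs that reproduce the prefix."""
    weights = []
    for length, gen in PROGRAMS:
        fits = all(gen(i) == bit for i, bit in enumerate(prefix))
        weights.append(Fraction(1, 2 ** length) if fits else Fraction(0))
    total = sum(weights)
    return [w / total for w in weights]

print(posterior([0, 1, 0, 1]))  # only the alternating generator survives
```

Every finite prefix has at least one computable description, so the posterior is always well-defined over the computable candidates; khafra's worry is about what happens when the true generator lies outside that set entirely.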

Sean Carroll has a nice post at Cosmic Variance explaining how Occam's razor, properly interpreted, does not weigh against Many Worlds or multiverse theories. Sample quote:

When it comes to the cosmological multiverse, and also the many-worlds interpretation of quantum mechanics, many people who are ordinarily quite careful fall into a certain kind of lazy thinking. The hidden idea seems to be (although they probably wouldn't put it this way) that we carry around theories of the universe in a wheelbarrow, and that every different object in the theory take

... (read more)
0shminux8yAnother quote: I agree with all that (except for the "and that's it" part for MWI, given that the Born rule is still a separate assumption). Counting worlds or universes towards the complexity of a quantum theory is as silly as counting species towards the complexity of the theory of evolution.

I was annoyed after first hearing the Monty Hall problem. It wasn't clear that the host must always open the door, which fundamentally changes the problem. Glad to see that it's a recognized problem.

"The problem is not well-formed," Mr. Gardner said, "unless it makes clear that the host must always open an empty door and offer the switch. Otherwise, if the host is malevolent, he may open another door only when it's to his advantage to let the player switch, and the probability of being right by switching could be as low as zero." Mr.

... (read more)
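Gardner's caveat is easy to check numerically. A quick Monte Carlo sketch (a hypothetical simulation, not from the original column), under the assumption that the host always opens an empty, un-picked door and always offers the switch:

```python
import random

def play(switch, trials=100_000, seed=0):
    """Estimate the win rate of the stay/switch strategy by simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Well-formed version: host always opens an empty, un-picked door.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```

Drop that assumption and the numbers change: a malevolent host who only offers the switch when the first pick is the car makes switching lose every time, which is exactly Gardner's point about the problem being ill-formed.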

Hypothetical: what do you think would happen if, in a Western country with a more or less "average" culture of litigation - whether using trial by jury, by judge or a mix of both - all courts were allowed to judge not just the interpretation, applicability, spirit, etc, but also the constitutional merit of any law in every case (without any decision below the Supreme Court becoming precedent)?

Say, someone is arrested and brought to trial for illegal possession of firearms, but the judge just decides that the country's Constitution allows anyone ... (read more)

1[anonymous]8yThis would mess horribly with jurisprudence constante, the principle that the legal system must above all be predictable, so that people can execute contracts or choose behaviors knowing the legal implications. But since we are already, despite our claims otherwise, ok with using things like retroactive laws, how problematic this would be in practice is hard to ascertain.
1Multiheaded8yI know about that principle, yeah - even EY mentioned it around here somewhere - but in practice the modern judiciary branch (to my eye) is very chaotic and even corrupt anyway (enforcement of copyright is, in practice, enforcement of insanity, mass murderers can more or less walk away if their fans are threatening enough, the policy towards juvenile offenders seems to be designed for destroying their lives and making them more dangerous, etc, etc), so indeed it's largely only predictable in its consistent badness. (That applies to more or less all modern nations with reasonably independent courts.) If we've got to have such chaos - if installing a formal tyrant, etc are our only alternatives - at least it should be a chaos made of better, more commonsensical decisions; today's hierarchy of courts filters away common sense.

Unless Clippy has been brainwashing some humans, the joys of paperclipping are not as alien to the human mind as we had thought:

http://www.slate.com/articles/life/design/2012/05/the_history_of_the_paper_clip_it_was_invented_in_1899_it_hasn_t_been_improved_upon_since_.single.html

0Grognor8y-Steve Martin [http://pastebin.com/JMwJm1CR]

To those who know Sam Harris' views on free will, how do they compare to the LW solution?

I'll get around to reading his eBook eventually, but it's not the highest priority in my backlog unless a few people say, "Yeah, read that. It's awesome."

Our key insight is a pessimistic one: this is the sort of situation which, though individuals and markets don’t handle it well, isn’t actually handled well by governments either. The fundamental mistake of statist thinking is to juxtapose the tragically, inevitably flawed response of individuals and markets to large collective-action problems like this one against the hypothetical perfection of idealized government action, without coping with the reality that government action is also tragically and inevitably flawed.

Pessimistic anarchism

Or, more simply... (read more)

2CronoDAS9yWell, as Robin Hanson said, coordination is hard.
-1timtyler9yIt seems easy enough - among close relatives. Check out eukaryotic cells, for instance. We might not all be genetically related - but we could easily become more memetically related.
0CronoDAS9yI made that same criticism to Robin Hanson, that the cells in an organism don't seem to have that kind of trouble, but within-organism coordination is clearly at least somewhat imperfect, considering that many animals do end up developing cancer during their lifetime. And I certainly have no idea what kinds of costs the body's various anti-cancer mechanisms end up imposing.
2timtyler9yYes, but we understand that cancer is a 'disposable soma' tradeoff, and that there are large, complex organisms that never get cancer. So any idea that cancer-like effects will necessarily prevent large-scale coordination seems pretty ridiculous. If empires get cancer, it will be because they are prepared to pay the costs of chopping out the occasional tumor. There are some costs to coordination, but it isn't that hard. Even ants manage it.
2CronoDAS9yWell, I know that plants don't get cancer the same way animals do, but it's even possible for insects to get cancer [http://wiki.answers.com/Q/Do_insects_get_cancer], although they don't usually live long enough to accumulate enough mutations to produce a tumor. Which complex organisms in particular did you have in mind? ("Sharks don't get cancer" is a myth invented by people who were trying to make money by selling shark cartilage.)
0timtyler9yEngineered mice [http://discovermagazine.com/2006/aug/areyouimmune/], naked mole rats [http://en.wikipedia.org/wiki/Naked_mole_rat#Resistance_to_cancer].

Is anyone else occasionally browsing LW on Android devices and finding the image based vote up, reply, parent etc. links on the comments much more difficult to hit correctly than regular links?

1Jayson_Virissimo9yThe edit button used to be almost impossible for me to push, but now seems to be working. I don't know what changed, so have no idea how to help. Sorry. BTW, what version of Android are you using?
1Risto_Saarelma9yI'm getting the problem with an Acer A500 tablet with Android 3.2.1 (the one it came with, after the updates it pushed on me) and the default browser, and on a HTC Desire with Cyanogenmod, Android 2.3.7 and Cyanogenmod's default browser. The edit button is indeed pretty much unpressable. I also can't seem to navigate to it using HTC Desire's trackpad, which can be used to highlight the other comment control links.

English is a viciously ambiguous language.

1) The preceding is not a quote, really, it's just a sentence I made up and want to analyze.

2) I think the sentence has more than an element of truth to it. While also being self-referential. This can be amusing in poetry, I guess, but I'm getting pretty sick of it right now.

3) I do not know what to do about this. I do not know how we even manage to talk to each other at all some times (!). Shades of meaning. Tones of voice running all out of sync to spoken words in order to hint at things that are better left u... (read more)

1TheOtherDave8yWhat sort of help do you want? Or, put a different way: how would you recognize something as helpful, if such help were provided?
0witzvo8yExcellent question! I didn't even think of that kind of ambiguity. I like the way you phrased it and then clarified it. Already helpful! I consider a link to something really good (and preferably brief) helpful. Or the right sentence. I would recognize something as helpful if I perceived that it would change my future behavior so that I communicate better. Better means: with less effort, more persuasive, not feeling left behind in a conversation, more accurate empathy. Heck, asking the right question, as you just did. Although I'm probably too "Socratic [http://en.wikipedia.org/wiki/Socratic_method]" for most people already.

Since this is too lofty, here's a limited goal. I would like to know how to communicate with like-minded folks on this site as well as possible. E.g. I didn't know the "friends" option existed, or what it does, or that I wasn't seeing all posts until I finally clicked on preferences. I feel that there's too much unsaid wisdom here, or rather that it's spread out to the winds, so an effort to fix that would be great. (Or at least I haven't found the succinct/definitive source, and no, I haven't read the sequences. Frankly I've been put off by wordiness and colloquiality. I'll get over it eventually, I guess, because I'm picking at it already.)

Also, in the rant, I noticed ambiguity as an asset and a liability and as a CPU sink. Are there alternatives that I don't know about for coping with this? And/or comments on my brace elaboration?

By the way, started reading Cialdini's Influence and I judge it as helpful, though not for English per se yet. Honestly, I found HPMOR more amusing than helpful, but yes, it got me here, I suppose, so points!

Edit^2: I think that this line is my most immediate pain point: Anything that helps me figure out workarounds or accept the necessity is great!

PS I just noticed that your question anticipated and resolved the answer "I don't know." Very slick. But let me say "I don't know" too.
1TheOtherDave8yRegarding your most immediate pain point... if there's a way around that, I don't know it. Humans are complicated, understanding natural language requires a huge amount of pre-existing knowledge, and understanding it well enough to carry on a conversation requires building some sort of model of my interlocutor. I recognize that this is a more difficult task for some people than others, and that this is essentially unfair.
0witzvo8yWell if there's not a work around, there are coping strategies, surely. Here's what I do {idealization}:

* Ignore most of it
* Wait until something catches my attention. Something worth thinking about or responding to
* Try to think what to say and hope/dream-on that there's a way to fit it into the conversation by the time I figured it out. (Let's see, how can I reverse what you did to me, on you?)
1TheOtherDave8yIt's not a problem for me, in that I enjoy it. Communication is a complex puzzle, and I succeed at it regularly enough to find it rewarding. But I acknowledge that that approach isn't available to everyone. That said, I think it's a skill worth mastering. As for how to master it... yeah, that's a great question. The best technique I know of is to make explicit predictions (typically private ones) of other people's reactions, and when they are wrong, pay a lot of attention to exactly how they were wrong... what that tells me about what I thought was true about that person that turns out not to be true.
0witzvo8yGood advice. Thanks! Edit: Yes, I am one of those look at the floor types. I'm trying to break that habit. Some improvement, maybe.
0witzvo8yI am beginning to Embrace Constructive ambiguity, and think that I might enjoy Communication after all. My current Stylistic plan: capitalize the letters of words where you intend the reader to notice a potential for ambiguity that you intend constructively. the capitals above are in draft status; written by instinct. I like that I and My happen to come out capital, though. e.g. ... would get the emphasis more right. (and I notice that starting sentences clearly is going to be a bit of a problem)
0witzvo8ywow. how do I give more Attention to your advice? it's great! I have not learned the explicit predictions part yet. I'm still just reacting. My cognition has been so overloaded I never had time for that and I haven't figured out where to fit it in. help? all I can predict right now is that what you right will be helpful. PS have I mentioned how much I admire Your Pseudonym? edit: hah! I wrote "right". I never Grokked Puns. (although some were clear enough I did find them funny, of course; and I've been sensitive to Irony for Forever) edit^2: {the pun was unintentional{conciously}, but awesome} PPS I sent a link to my dad about this. I think he'll get it. beyond that? also, notation matters (link?) but I'm sure you know that in your own way. so much happening by Accident these days. It seems to be coming from Communication. neat-o.
0TheOtherDave8yYou might also want to know that when you reply to yourself, as you do above, you get notifications but I don't. (I just happened to notice this in Recent Comments.)

I find lists useful for keeping track of things I want to get to later but don't have time/capacity for now.

In principle, I endorse the idea of typographic markers for particular meta-level emphasis -- what in speech I would use tone of voice for -- but in practice I find it distracts me more than it helps, and I pattern-match it to crankishness. I sometimes use italics for the purpose, but I find even that more and more distasteful as time goes by. This all seems arbitrary and even unfair of me, but there it is anyway.

Re: pseudonym... thanks.

Two lines of text without an intervening blank line will get parsed as a single line, unless you put two spaces at the end of the first line.
0witzvo8yPerfect! What's a list?{serious}

I want to talk about human intelligence amplification (IA), including things like brain-machine interfaces, brain/CNS mods, and perhaps eventually brute-force uploading/simulation. There are parallels between the dangers of AI and IA.

IA powerful enough to be or create an x-risk might be created before AGI. (E.g., successful IA might jump-start AGI development.) IA is likely to be created without a complete understanding of the human brain, because the task is just to modify existing brains, not to design one from scratch. We will then need FIA - the IA equ... (read more)

0vi21maobk9vp8yIA is likely to go up in big steps, but IA FOOM makes even less sense than AI FOOM does because of the human in the loop. Also, it would probably give humans a lot of improvements before solving the problem of low number of independent simultaneous attention threads. So it is not clear that any IA direction would produce a situation of single unstoppable entity. If IA simply greatly increases the thinking power of a thousand people by different amounts, I would not be sure that the medium-term existential threat of this field is greater than the overall short-term existential threat created by something existing here and now like Sony...
0TimS8yMy sense is that explicit technological modifications of humans are already heavily concerned with the question "Will we still be human after this modification?" which at least gestures at the problems you identify. It is exactly the lack of this type of concern in the AI field that motivates SIAI's Friendliness activism. But the sorts of technological advances you are pointing towards seem more likely to arise in part from medical research methodologies, which seem more concerned with potential negative psychological and sociological effects than some other forms of technological research. In short, if every AI researcher was already worried about safety to the extent that medical researchers seem to be worried, then there would be no need for SIAI to exist - all AI researchers worrying about the Friendliness problem is what winning looks like for SIAI. Since medical researchers are already worried about these types of problems, an SIAI-equivalent is not necessary. Consider all the different medical ethics councils - which are much more powerful than their institutional equivalents in AI research.
[-][anonymous]8y 0

I just read the following comment from Ben123:

http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/6rzn

And it mentioned a chain of words I had not thought of before: "Multiplied by the reciprocal of the probability of this statement being true." At that point, I felt like I couldn't get much past "I notice I am confused." without feeling like I was making a math error. (And that it was possible the math error was in assuming the statement could be evaluated at all, which I wasn't sure of.)

In general, how should I asses... (read more)

A Transit of Venus finished a few hours ago. (100% overcast where I am, alas.)

The next one is in 2117. How many of us expect to see it?

ETA: So far, one nitpick and one Singularitarian prediction.

Personally, I expect to be dead in the usual way by the middle of this century at the latest, and even if I had myself frozen, I don't expect cryonic revival to be possible by 2117. I am not expecting a Singularity by then either. Twenty-year-olds today might reasonably have a hope of life extension of the necessary amount.

ETA2: A little sooner than that, there's ... (read more)

0Thomas8yFrom Earth, the next one is in 2117. From a high orbit around the Earth, there will be many transits of Venus in the next few decades. We should not see the one in 2117, since we should have turned our Solar system into some useware before then.
[-][anonymous]8y 0

Would we lose much by not letting new, karmaless accounts post links? Active moderation is never going to be fast enough to keep stuff like this off the first page of http://lesswrong.com/comments, and it diminishes my enjoyment.

Or we could use some AI spam detection, I guess.

[This comment is no longer endorsed by its author]
[-][anonymous]8y 0

I've just realized that my information diet is well characterized as an eating disorder. Unfortunately, I'm not able to read about eating disorders (to see if their causes could plausibly result in information-diet analogs and whether their treatments can be used/adapted for information consumption disorders), because I get "sympathy attacks" (pain in my abdomen, weakness in my arms, mild nausea) when I see, hear salient descriptions of, or read about painful or very uncomfortable experiences.

I don't know what to do at this point. I'd like to hav... (read more)

[This comment is no longer endorsed by its author]

To rationalize dust specks over torture, one can construct a utility function where utility of dust specks in n people is of the Zeno type, -(1-1/2^n), and the utility of torture is -2. Presumably, something else goes wrong when you do that. What is it?
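As a quick numerical illustration (a sketch of the Zeno-type function proposed above, not anyone's endorsed values - the function names here are mine), the point of the construction is that the specks term is bounded below by -1, so it can never reach the -2 assigned to torture:

```python
def specks_utility(n):
    """Zeno-type disutility of dust specks in n people: -(1 - 1/2^n).

    Monotonically decreasing in n, but bounded below by -1."""
    return -(1 - 0.5 ** n)

TORTURE_UTILITY = -2.0

# No matter how many people get dust specks, the total disutility
# approaches -1 but never reaches it, so it never outweighs torture.
print(all(specks_utility(n) > TORTURE_UTILITY for n in range(1, 10_000)))
```

Under this utility function, specks are preferred to torture at every n, which is exactly what the bounded (non-Archimedean in the limit) construction was designed to deliver.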

7Zack_M_Davis8yAs commenter Unknown pointed out in 2008 [http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/hy0], there must then exist two events A and B, with B only worse than A by an arbitrarily small amount, such that no number of As could be worse than some finite number of Bs.
1shminux8yThanks, that's a valid point, pretty formal, too. I wonder if it invalidates the whole argument.
0CuSithBell8yMany find that sort of discounting contrary to intuition and to desired results: e.g., it implies that the suffering of some particular person is more or less significant depending on how many other people are suffering in a similar enough way.
-1wedrifid8yNothing. If those are your actual preferences then that is the choice you should make. Not because you can rationalize it but because that is, in fact, what you want to do all things considered.
[-][anonymous]9y 0

Here's a little math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)

Prove or disprove that for any real number p between 0 and 1, there exist finite or infinite sequences x_m and y_n of positive reals, and a finite or infinite matrix of numbers φ_mn, each of which is either 0 or 1, such that:

1) Σ_m x_m = 1
2) Σ_n y_n = 1
3) φ_mn = φ_nm for all m, n
4) Σ_m x_m φ_mn = p for all n
5) Σ_n y_n φ_mn = p for all m

Right now I... (read more)
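For what it's worth, the rational case p = a/b seems to admit an easy explicit construction: take x = y uniform over b points, and let φ be a symmetric circulant 0/1 matrix with exactly a ones in each row and column. A quick numerical check of that idea (a sketch under my own assumptions; all names here are mine, not from the problem statement):

```python
from fractions import Fraction

def circulant_construction(a, b):
    """For p = a/b, build uniform x = y over b points and a symmetric
    circulant 0/1 matrix phi with exactly a ones per row and column."""
    assert 0 < a <= b
    offsets = set()
    if a % 2 == 1:
        offsets.add(0)  # odd row sum: include the diagonal offset
    d = 1
    while len(offsets) < a:
        offsets |= {d % b, (b - d) % b}  # add a symmetric pair of offsets
        d += 1
    phi = [[1 if (m - n) % b in offsets else 0 for n in range(b)]
           for m in range(b)]
    x = [Fraction(1, b)] * b
    return x, phi

def satisfies_conditions(a, b):
    """Check conditions 1-5 exactly, using rational arithmetic."""
    x, phi = circulant_construction(a, b)
    p = Fraction(a, b)
    symmetric = all(phi[m][n] == phi[n][m]
                    for m in range(b) for n in range(b))
    sums = all(sum(x[m] * phi[m][n] for m in range(b)) == p
               for n in range(b))
    return sum(x) == 1 and symmetric and sums

print(all(satisfies_conditions(a, b)
          for b in range(1, 9) for a in range(1, b + 1)))
```

Since x = y here, conditions 4 and 5 coincide by symmetry of φ. This of course says nothing about irrational p, where some limiting or infinite-matrix argument would presumably be needed.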

[This comment is no longer endorsed by its author]

I just added a new post on my blog about some of my experiences with PredictionBook. It may be of interest to some here, but understand that the level of discourse is meant to be exactly in-between Less Wrong and my family and friends. It is very awkward for me to write this way and I don't really have the hang of it yet, so go easy. It is a very delicate balance between saying things imprecisely (and even knowingly wrong or incomplete) and keeping things jargon free and understandable to a wider audience.

[-][anonymous]9y 0

PhD Comics: The Problem

A graphical representation of sunk costs.

[This comment is no longer endorsed by its author]