Less Wrong: Open Thread, September 2010

by matt · 1 min read · 1st Sep 2010 · 628 comments


Personal Blog

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes...
8[anonymous]11yI suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this combined with the fact that they're probably used to being "the smart one" makes it extremely intimidating to post anything, and extremely de-motivational if they make a mistake. In the interests of spreading the idea that it's ok if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community. Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even skills like metacognition tend not to apply outside of the specific domain you've learned them in. Intelligence is one of the only things that gives you a general problem solving/task completion ability.
1xax11yOnly if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.
7jimrandomh11yThis is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai [http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/]!
5Daniel_Burfoot11yI think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins). As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.
3Kaj_Sotala11yThis is true. Intelligence research has shown that intelligence is more useful for more complex tasks, see e.g. Gottfredson 2002 [http://www.udel.edu/educ/gottfredson/reprints/2002notamystery.pdf].
4[anonymous]11yI like this anecdote. I never valued intelligence relative to practice, thanks to an upbringing that focused pretty heavily on the importance of effort over talent. I'm more likely to feel behind, insufficiently knowledgeable to the point that I'm never going to catch up. I don't see why it's necessarily a cheerful observation that practice makes a big difference to performance. It just means that you'll never be able to match the person who started earlier.
2Houshalter11yMake them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

I imagine lots of kids play Farmville already.

4Kaj_Sotala11yThose games don't really improve any sort of skill, though, and neither does anyone expect them to. To teach kids this, you need a game where you as a player pretty much never stop improving, so that having spent more hours on the game actually means you'll beat anyone who has spent less. Go might work.
6rwallace11yThere are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it. Every time I contemplate that, I wish all the talent thus spent could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.
3Christian_Szegedy11yI agree with you. I also think that there are several reasons for that: First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer. The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sports training. Perhaps they are viewed as "fun activities." Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will result in much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (age 10, or even 6, is almost too late to begin seriously in some of those areas).
3NihilCredo11yA somewhat related, impactful graph [http://infobeautiful2.s3.amazonaws.com/goggle_boxes.png]. Of course, human effort and interest is far from perfectly fungible. But your broader point retains a lot of validity.
5Sniffnoy11yThere's a large difference between the "leveling up" in such games, where you gain new in-game capabilities, and actually getting better, where your in-game capabilities stay the same but you learn to use them more effectively. ETA: I guess perhaps a better way of saying it is, there's a large difference between the causal chains time->winning, and time->skill->winning.
1Jonathan_Graehl11yI'm guilty of a sort of fixation on IQ (not actual scores or measurements of it). I have an unhealthy interest in food, drugs and exercises (physical and mental) that are purported to give some incremental improvement. I see this in quite a few folks here as well. To actually accomplish something, more important than these incremental IQ differences are: effective high-level planning and strategy, practice, time actually spent trying, finding the right collaborators, etc. I started playing around with some IQ-test-like games [http://www.cambridgebrainsciences.com/] lately and was initially a little let down with how low my performance (percentile, not absolute) was on some tasks at first. I now believe that these tasks are quite specifically-trainable (after a few tries, I may improve suddenly, but after that I can, but choose not to, steadily increase my performance with work), and that the population actually includes quite a few well-practiced high-achievers. At least, I prefer to console myself with such thoughts. But, seeing myself scored as not-so-smart in some ways, I started to wonder what difference it makes to earn a gold star that says you compute faster than others, if you don't actually do anything with it. Most people probably grow out of such rewards at a younger age than I did.
1Wei_Dai11yI'm not sure I agree with that. In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas? I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?
7Kaj_Sotala11yI should probably note that my overvaluing of intelligence is more of an alief [http://en.wikipedia.org/wiki/Alief_%28belief%29] than a belief. Mostly it shows up if I'm unable to master (or at least get a basic proficiency in) a topic as fast as I'd like to. For instance, on some types of math problems I get quickly demotivated and feel that I'm not smart enough for them, when the actual problem is that I haven't had enough practice on them. This is despite the intellectual knowledge that I could master them, if I just had a bit more practice. That sounds about right, though I would note that there's a huge amount of background knowledge that you need to absorb on LW. Not just raw facts, either, but ways of thinking. The lack of improvement might partially be because some people have absorbed that knowledge when they start posting and some haven't, and absorbing it takes such a long time that the improvement happens too slowly to notice.
3wedrifid11yThat's interesting. I hadn't got that impression but I haven't looked too closely at such trends either. There are a few people whose comments have improved dramatically but the difference seems to be social development and not necessarily their rational thinking - so perhaps you have a specific kind of improvement in mind. I'm interested in any further observations on the topic by yourself or others.

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.
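The "loads and threads up to 400 most recent comments" feature can be sketched as a small pure function that nests a flat comment list under its parents. This is a hypothetical reconstruction of the idea, not the script's actual code; the comment shape `{ id, parent_id, text }` is an assumption:

```javascript
// Hypothetical sketch: thread a flat list of comments into a tree by
// parent id. Comments whose parent isn't in the loaded batch fall back
// to top level rather than being dropped.
function threadComments(flat) {
  const byId = new Map();
  for (const c of flat) {
    byId.set(c.id, { ...c, children: [] });
  }
  const roots = [];
  for (const node of byId.values()) {
    const parent = node.parent_id == null ? null : byId.get(node.parent_id);
    if (parent) {
      parent.children.push(node); // nest the reply under its parent
    } else {
      roots.push(node); // top-level comment, or parent not loaded
    }
  }
  return roots;
}
```

Rendering is then a straightforward recursive walk over `roots`, indenting once per level.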

6Wei_Dai11yHere's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php [http://www.ibiblio.org/weidai/lesswrong_user.php] I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.
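The 1-hour cache rule amounts to a trivial freshness check, illustrated here in JavaScript for consistency with the userscript (the real page is a PHP script whose internals aren't shown, so the helper name is invented):

```javascript
// Sketch of the freshness rule described above: a cached copy younger
// than one hour is served as-is, which is why a user's very latest
// comments may be missing from the page.
const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour

function isCacheFresh(cachedAtMs, nowMs) {
  return nowMs - cachedAtMs < CACHE_TTL_MS;
}
```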
0NihilCredo11yMay I suggest submitting the script to userscripts.org? It will make it easier for future LessWrong readers to find it, as well as detectable by Greasefire [https://addons.mozilla.org/en-US/firefox/addon/8352/].
0ata11yNice! Thanks. Edit: "shows only new/unread comments upon refresh" — how does it determine readness?
0Wei_Dai11yAny comment that has been scrolled off the screen for 5 seconds is considered read. (If you scroll back, you can see that the text and border have turned from black to gray.) If you scroll to the bottom and stay there for 5 seconds, all comments are marked read.
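The rule Wei_Dai describes reduces to a small predicate. This is a hypothetical sketch of the logic only, with invented names; the real script also has to track scroll events and restyle the comment:

```javascript
// Read-marking rule: a comment counts as read once it has been
// continuously off-screen for 5 seconds. offscreenSinceMs is null
// while the comment is still visible (names are assumptions).
const READ_DELAY_MS = 5000;

function isMarkedRead(offscreenSinceMs, nowMs) {
  if (offscreenSinceMs === null) return false; // still on screen
  return nowMs - offscreenSinceMs >= READ_DELAY_MS;
}
```

A scroll handler would record `offscreenSinceMs` when a comment leaves the viewport and clear it (back to null) if the user scrolls it back into view before the delay elapses.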
0andreas11yThanks for coding this! Currently, the script does not work in Chrome (which supports Greasemonkey out of the box).
2Wei_Dai11yFrom http://dev.chromium.org/developers/design-documents/user-scripts [http://dev.chromium.org/developers/design-documents/user-scripts] My script uses 4 out of these 6 features, and also cross-domain GM_xmlhttpRequest (the comments are actually loaded from a PHP script hosted elsewhere, because LW doesn't seem to provide a way to grab 400 comments at once), so it's going to have to stay Firefox-only for the time being. Oh, in case anyone developing LW is reading this, I release my script into the public domain, so feel free to incorporate the features into LW itself.
0Morendil11yWould you consider making display of author names and points a toggle and hidden by default, à la Anti-Kibitzer?
2Wei_Dai11yI've added some code to disable the author/points-based coloring when Anti-Kibitzer is turned on in your account preferences. (Names and points are already hidden by the Anti-Kibitzer.) Here is version 1.0.1 [http://www.ibiblio.org/weidai/lesswrong_comments_reade.user.js]. More feature requests or bug reports are welcome.
0[anonymous]11ySounds fantastic! Err... but the link is broken.

Not sure what the current state of this issue is, apologies if it's somehow moot.

I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.

Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.

So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.

6Vladimir_Nesov11yHe did give permission to restore the posts (I didn't ask about comments), when I contacted him originally. There remains the issue of someone being technically able to restore these posts.
6matt11yWe have the technical ability, but it's not easy. We wouldn't do it without Roko's and Eliezer's consent, and a lot of support for the idea. (I wouldn't expect Eliezer to consent to restoring the last couple of days of posts/comments, but we could restore everything else.)
5wedrifid11yIt occurs to me that there is a call for someone unaffiliated to maintain a (scraped) backup of everything that is posted in order to prevent such losses in the future.
2Douglas_Knight11ySurely it would be easy to restore just Roko's posts, leaving his comments dead. Also, if you don't end up restoring them, it's rather awkward that he's in the top contributors list, with a practically dead link.
2matt11yIt's doable. Are you now talking to the wrong person? [ETA: Sorry - reading that back it was probably rude - I meant to say something closer to "It's doable, but I still need Eliezer's okay before I'll do anything."]
7Eliezer Yudkowsky11yOkay granted. I also think this would be a good idea. Actually, I'd be against having an easy way to delete all contributions in general, it's too easy to wreck things that way.
9wedrifid11yAre you saying that the only person who should be conveniently able to remove other people's contributions is you? People's comments are their own. It is unreasonable to leave them up if the author chooses not to. Fortunately, things that have been posted on the internet tend to be hard to destroy. Archives can be created and references made to material that has been removed (for example, see RationalWiki [http://rationalwiki.org/wiki/LessWrong#cite_note-3]). This means that a blogger cannot expect to be able to remove their words from the public record, even though they can certainly stop publishing them themselves, removing their ongoing implied support of those words. I actually do support [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2ju8?c=1] keeping an archive of contributions and it would be convenient if LW had a way to easily restore lost content. It would have to be in a way that was either anonymized ("deleted user"?) or gave some clear indication that the post is by "past-Roko" or "archived-Roko", rather than pretending that it is by the author himself, in the present tense. That is, it would acknowledge the futility of deleting information on the internet but maintain common courtesy to the author. There is no need to disempower the author ourselves by removing control over their own account when the very nature of the internet makes the deleting efforts futile anyway.
4XiXiDu11yWhat is necessary is just that EY thinks of a way to tell people why something had to be deleted, without referring to what has been deleted in detail, and why they should trust him on that. I see that freedom of speech has to end somewhere. Are we going to publish detailed blueprints for bioweapons? No. I just don't see how EY wants to accomplish that, as in the case of the Roko incident you cannot even talk about what has been deleted in an abstract way. Convince me not to spread the original posts and comments as much as I can? How are you going to do that? I already posted another comment yesterday with the files, which I deleted again after thinking about it. This is just too far and fuzzy for me to not play with the content in question without thinking twice. What I mean is that I personally have no problem with censorship, if I can see why it had to be done.
8khafra11yI've been thinking about it by moving domains: Imagine that, instead of communicating by electromagnetic or sound waves, we encoded information into the DNA of custom microbes and exchanged them. Would there be any safe way to talk about even the specifics of why a certain bioweapon couldn't be discussed? I don't think there is. At some point in weaponized conversation, there's a binary choice between inflicting it on people and censoring it.
8Alicorn11yLike the descoladores!
0khafra11yHah, I didn't realize someone else had already imagined it. Generalizing from multiple, independently-generated fictional evidence?
3XiXiDu11yAwesome reply, thanks :-) Didn't know about this [http://en.wikipedia.org/wiki/Concepts_in_the_Ender%27s_Game_series#Descoladores] either, thanks Alicorn [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2p0f?c=1]. I wonder how the SIAI is going to resolve that problem if it caused nightmares inside the SIAI itself. Is EY going to solve it all by himself? If he was going to discuss it, then with whom, since he doesn't know who's strong enough beforehand? That's just crazy. Time will end soon [http://arxiv.org/abs/1009.4698] anyway, so why worry I guess. Bye.
0David_Gerard11yAnd not just the example you cite. RationalWiki has written an entire MediaWiki extension [http://rationalwiki.org/wiki/User:Capturebot2] specifically for the purpose of saving snapshots of Web pages, as people trying to cover their tracks happens a lot on some sites we run regular news pages on (Conservapedia [http://rationalwiki.org/wiki/Conservapedia:What_is_going_on_at_CP%3F], Citizendium [http://rationalwiki.org/wiki/RationalWiki:What_is_going_on_at_Citizendium%3F]). Memory holing gets people really annoyed, because it's socially extremely rude. It's the same problem as editing a post to make a commentator look foolish. There may be general good reasons for memory holing, but it must be done transparently - there is too much precedent for presuming bad faith unless otherwise proven.
0gwern11ySeems like a heavy-weight solution. I'd just use http://webcitation.org/ [http://webcitation.org/] (probably combined with my little program, archiver [http://hackage.haskell.org/package/archiver]).
0David_Gerard11yA simple mechanism to put the saved evidence in the same place as the assertions concerning it, rather than out in the cloud, is not onerous in practice. Mind you, most of the disk load for RW is the images ...

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words

6wnoise11yMy gloss on it is that this is at best a minor part, though it figures in. The topic is an idea that has horrific implications that are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation. More explaining why many won't think it dangerous at all. This doesn't directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer. I personally don't buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I'm willing to self-censor to some degree, even though I hate the heavy-handed response.
9cata11yAnother perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don't really live my life in a way that's consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats. I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.

How about an informed consent form:

  • (1) I know that the SIAI mission is vitally important.
  • (2) If we blow it, the universe could be paved with paper clips.
  • (3) Or worse.
  • (4) I hereby certify that points 1 & 2 do not give me nightmares.
  • (5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
1Snowyowl11yI feel you should detail point (1) a bit more (explain in more detail what the SIAI intends to do), but I agree with the principle. Upvoted.
1wedrifid11yI like it! Although 5 could be easily replaced by "Go earn a lot of money in a startup, never think about FAI again but still donate money to SIAI because you remember that you have some good reason to that you don't want to think about explicitly."
7Kaj_Sotala11yI read the idea, but it seemed to have basically the same flaw as Pascal's wager does. On that ground alone it seemed like it shouldn't be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn't save the post.)
5timtyler11yMy analysis was that it described a real danger. Not a topic worth banning, of course - but not as worthless a danger as the one that arises in Pascal's wager.
3homunq11yI think it's safe to tell you that your second two hypotheses are definitely not on the right track.

If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion is very rare here.

I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.

8Emile11yI'm pretty sure that "riddle theory" is a reference to Roko's post, not a new banned topic.
5homunq11yMy registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia's "contact user" function (which connects to the same email.) Suggestions like your email idea would be the main purpose of having the discussion (here or on a top-level post). I don't think that some short-lived chatter would change a strongly-held belief, and I have no desire nor capability of unseating the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted post authors or at least their ability to view the responses to their own deleted post, would be helpful.
6xamdam11yYes. I think that lack of policy 1) reflects poorly on the objectivity of moderators, even if in appearance only 2) diverts too much energy into nonproductive discussions.
4Relsqui11yAs a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators). This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.
0NancyLebovitz11yAny thoughts about whether there are differences between communities with a lot of specific rules and those with a more general "be excellent to each other" standard?
4Relsqui11yThat's a really good question; it makes me want to do actual experiments with social communities, which I'm not sure how you'd set up. Failing that, here are some ideas about what might happen: Moderators of a very strictly rule-based community might easily find themselves in a walled garden [http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/] situation just because their hands are tied. (This is the problem we had in the one I mentioned, before we made a conscious decision to be more flexible.) If someone behaves poorly, they have no justification to wield to eject that person. In mild cases they'll tolerate it; in major cases, they'll make an addition to the rules to cover the new infraction. Over time the rules become an unwieldy tome, intimidating users who want to behave well, reducing the number of people who actually read them, and increasing the chance of accidental infractions. Otherwise-useful participants who make a slip get a pass, leading to cries of favoritism from users who'd had the rules brought down on them before--or else they don't, and the community loses good members. This suggests a corollary of my earlier admonition for flexibility: What written rules there are should be brief and digestible, or at least accompanied by a summary. You can see this transition by comparing the long form [http://www.xkcdb.com/rules/] of one community's rules, complete with CSS and anchors that let you link to a specific infraction, and the short form [http://www.xkcdb.com/rules/simple] which is used to give new people a general idea of what's okay and not okay. The potential flaw in the "be excellent to each other" standard is disagreement about what's excellent--either amongst the moderators, or between the moderators and the community. For this reason, I'd expect it to work better in smaller communities with fewer of either. 
(This suggests another corollary--smaller communities need fewer written rules--which I suspect is true, but with less confidence.)
5[anonymous]11yA minute in Konkvistador's mind:
1thomblake11yI do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed. I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.
5[anonymous]11yCan I haz evil soul crushing idea plz? [http://24.media.tumblr.com/tumblr_kyzmp9VxAD1qzmowao1_500.jpg] But to be serious: yes, if I find the idea foolish while people here still take it seriously, that reduces my optimism as well, just as much as malice on the part of the LessWrong staff or just plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don't take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would be much less fun knowing the evil soul-crushing self-fulfilling prophecy (tm); the situation around it is hilarious. What really catches my attention, however, is the thought experiment of how exactly one is supposed to quarantine a very, very dangerous idea, since in the space of all possible ideas I'm quite sure there are a few that could prove very toxic to humans. The LW members that take it seriously are doing a horrible job of it.
0NancyLebovitz11yUpvoted for the cat picture.
0thomblake11yIndeed, in the classic story, it was an idea whose time had come, and there was no effective means of quarantining it. And when it comes to ideas that have hit the light of day, there are always going to be those of us who hate censorship more than death.
4Airedale11yI think such discussion wouldn't necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven't gotten around to it.
2Emile11yI don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves in challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well. See this [http://meatballwiki.org/wiki/BenevolentDictator] on Meatball Wiki, that has quite a few pages on organization of Online Communities.

I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.

I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.

3timtyler11yThe second rule of Less Wrong is, you DO NOT talk about Forbidden Topics [http://www.youtube.com/watch?v=agi8PUmlAKU].
3Emile11yIt's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread. Too much meta discussion is bad for a community.
1homunq11yThe thing I'm trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that's possible, and I believe it is more appropriate to discuss this in public. (Actually, since I've been making noise about this, and since I've promised not to reveal it, I now know the secret. No, I won't tell you, I promised that. I won't even tell who told me, even though I didn't promise not to, because they'd just get too many requests to reveal it. But I can say that I don't believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)
1JGWeissman11yNormally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
5homunq11yLook, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.
0[anonymous]11yHell? That's it?
1[anonymous]11yThanks. More reason to waste less time here. I have been reading OB and LW from about a month of OB's founding, but this site has been slipping for over a year now. I don't even know what specifically is being discussed; not even being able to mention the subject matter of the banned post, and having secret rules, is outstandingly stupid. Maybe I'll come back again in a bit to see if the "moderators" have grown up.
6NihilCredo11yAs a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome). At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else - specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.

I made this site last month: areyou1in1000000.com

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

9Snowyowl11yWow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
2Vladimir_M11yWhen I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper: Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980 Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put for open access online -- it would be definitely worth it -- but I was unable to find any contact information; it seems like he retired long ago. There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally: Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973
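To make the "correlation between X and Y-X" framing concrete, here is a small illustrative simulation (my own sketch, not from the linked post; the variable names and parameters are arbitrary): two noisy measurements of the same unchanged underlying skill produce a negative correlation between the first score and the apparent "change", even though nothing about anyone actually changed between measurements.

```python
import random
import statistics


def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5


random.seed(0)
n = 10_000

# Fixed underlying skill for each person; it never changes.
skill = [random.gauss(100, 15) for _ in range(n)]

# Two test scores: same skill both times, independent noise each time.
x = [s + random.gauss(0, 10) for s in skill]
y = [s + random.gauss(0, 10) for s in skill]

# "Change" between the two tests.
change = [yi - xi for xi, yi in zip(x, y)]

# The change score correlates negatively with the first score:
# high first-time scorers "regress" purely because of noise.
print(round(pearson(x, change), 2))
```

With these parameters the theoretical correlation is Cov(X, Y-X) / sqrt(Var(X)Var(Y-X)) = -100 / sqrt(325 * 200), roughly -0.39, which is what the simulation recovers despite there being no real change at all.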

Is he the same guy whose trivial error inspired this post?


If so, just how smart do you have to be in order to be SIAI material?

Ridiculously smart, as I'm sure you can guess. Of note is that you're smart and yet just made the fundamental attribution error.

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (i.e. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The

... (read more)

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e. g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything that's distinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.

9steven046111yI think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying. The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.
1SilasBarta11yGood point. I guess the game's labeling system shouldn't deceive you like that, but it would need to have characters that promote non-functioning technology, after some warning that e.g. not everyone is reliable, that these people aren't the tutorial.
9DSimon11yBest I think would be if the warning came implicitly as part of the game, and a little ways into it. For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion. Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.
2SilasBarta11yExactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and be able to throw out unreliable claims despite bold attempts to protect them from being proven wrong. So is this game idea feasible, and does it meet your criteria?
3DSimon11yI think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens [http://tvtropes.org/pmwiki/pmwiki.php/Main/RubberForeheadAliens]; maybe the game could be called Standing On The Eyestalks Of Giants. On the particular criteria: * Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount. * Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias [http://lesswrong.com/lw/il/hindsight_bias/]. * Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic) , but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising). * Not rigged: Assuming the interface for modelling the game wor
1SilasBarta11yThanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea. Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!" Different magnetic field too, so they can't just say, "lol, make a compass, it points north". I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself. Also, something to motivate the storyline: You need to come up with better cannonballs for the navy (i.e. have to identify what increases a metal's yield energy). Or come up with a way of detecting counterfeit coins.
0Mass_Driver11yLet me know if you would like help with the writing, either in terms of brainstorming, mapping the flow, or even just copyediting.
5CronoDAS11yOr you could just go look up the correct answers on gamefaqs.com.
2JGWeissman11ySo the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and which NPCs give them.
4CronoDAS11yEver played Nethack? ;)
9Emile11yNote also the Wiki page [http://wiki.lesswrong.com/wiki/The_Less_Wrong_Video_Game], with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!) One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky [http://lesswrong.com/lw/4g/eliezer_yudkowsky_facts/383?c=1]. That's more about SIAI propaganda than rationality though. One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, but some actions are vitally important but not rewarded by any feedback. For example (to keep with the "build an AI" example), say you can hire scientists, and the scientists' profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.), and some filler text about what they did their thesis on and boring stuff like that (think: stats get big icons, and are at the top, filler text looks like boring background filler text). And once you hired the scientists, you get various bonuses (money, prestige points, experiments), but the only one of those factors that's of any importance at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from various tell-tale signs for "stupid" in the "boring" filler texts -- for example, things like (also) having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad. 
So basically the idea would be that the rules of the game you'
4DSimon11yI think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (i.e. Achievement Unlocked [http://armorgames.com/play/2893/achievement-unlocked]), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed. Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface, i.e. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo-like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest, and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game i... (read more)

2Emile11yHuh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh? *updates* I hadn't heard of that game [http://en.wikipedia.org/wiki/The_Void_%28video_game%29], I might try it out. I'm actually surprised a game like that was made and commercially published.
5Raemon11yIt's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have higher than average percentages of people willing to take a lot of time to think about and explore a cerebral game). Some specific lessons I'd draw from that game and apply here: 1. Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game.) 2. Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time). The Void actually spells out everything as best they can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already). My sense is that the be
3NihilCredo11yIt was made by a Russian developer which is better known for its previous effort, Pathologic [http://en.wikipedia.org/wiki/Pathologic], a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, in Russia Pathologic had acceptable sales and excellent critical reviews, which is why Ice-Pick Lodge could go on with a second project.
2PeerInfinity11y"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky." See also: The Friendly AI Critical Failure Table [http://transhumanistwiki.com/wiki/Friendly_AI_Critical_Failure_Table] And I think all of the other suggestions you made in this comment would make an awesome game! :D
1Emile11yRiffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan [http://www.unicornjelly.com/codex1.html]. The catch is (spoiler warning!), the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, where some essential "genetic" information passes. And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).
6khafra11yI'm not sure if transformice [http://www.transformice.com/] counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results [http://www.youtube.com/watch?v=GJReZRji7tg] seem to support ciphergoth's conjecture [http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/2kur?c=1] on intelligence levels.
3Emile11yTransformice is awesome :D A game hasn't made me laugh that much for a long time. And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.
3Perplexed11yOne way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0. The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter. Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable. But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.
3steven046111yText adventures seem suitable for this sort of thing, and are relatively easy to write [http://www.inform7.com]. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.
0CronoDAS11yAnother thing that's relatively easy to create is a Neverwinter Nights module, but you're pretty much stuck with the D&D mechanics if you go that route.

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day... (read more)

4beriukay11yOne issue I've frequently stumbled across is the people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't want to ever talk about issues because they feel like talking about their beliefs with me is like being subject to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words always seemed disingenuous to me, but if I can get someone to actually consider the opposition by saying things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied ... (read more)

2Morendil11yYes please. Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)
0Relsqui11yIt doesn't, and I haven't, although I can infer at least a little from the term itself. Your call if you want to try and explain it or wait for me to remember, find a library that has it, acquire it, and read it before understanding. ;) Is there any specific subject under that umbrella which you'd like addressed? Narrowing the focus will help me actually put something together.
0Morendil11yThe Wikipedia page [http://en.wikipedia.org/wiki/Diaspora_%28novel%29] explains a little about Bridgers. I'm afraid if I knew how to narrow this topic down I'd probably be writing it up myself. :)
0Relsqui11yHmm. I'm wary of the analogy to separate species; humans treat each other enough like aliens as it is. But so noted, thank you.
1Rain11yI wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to be effective in a text medium, requiring immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.
1Relsqui11yThanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.

[tl;dr: quest for some specific cryo data references]

I am preparing to do my own, deeper evaluation of cryonics. For that I have read through many of the case reports on the Alcor and CI pages. Due to my geographic situation, I am particularly interested in the feasibility of actually getting a body from Germany over to their respective facilities. The reports are quite interesting and provide lots of insight into the process, but what I am still looking for are the unsuccessful reports: cases in which a signed-up member was not brought in due to legal interference, next-of-kin decisions, and the like. Is anyone aware of a detailed log of those? I would also like to see how many signed clients are lost due to the circumstances of their death.

0Document11yCan't help with your question, but speaking of Europe... [http://www.acceleratingfuture.com/michael/blog/2010/09/eucrio-good-news-for-european-cryonicists/] .

Eliezer was quite clear that he would do nothing that violates his own moral standards. He was also quite clear (though perhaps joking) that he didn't even want to continue to listen to folks who don't pay their fair share.

He was quite clear that he didn't want to continue listening to people who thought that arguing about the specific output of CEV, at the object level, was a useful activity, and that he would listen to anyone who could make substantive intellectual contributions to the actual problems at hand, regardless of their donations or lack thereof ("It goes without saying that anyone wishing to point out a problem is welcome to do so. Likewise for talking about the technical side of Friendly AI." — the part right after the last paragraph you quoted...). You are taking a mailing list moderation experiment and blowing it way out of proportion; he was essentially saying "In my experience, this activity is fun, easy, and useless, and it is therefore tempting to do it in place of actually helping; therefore, if you want to take up people's time by doing that on SL4, my privately-operated discussion space that I don't actually have to let you use at all if I don'... (read more)

Once there is recursively improving AI, the human race is irrelevant; anyone planning to continue living under those conditions either has not thought things through, would be equally happy living in a permanent simulation, or as a wirehead.

This does not follow.

2billswift11yI see your point, I was reasoning from "the human race" (ie, humanity in general) to an unjustified claim about individual humans and what they "should" do or believe.

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

9Airedale11yThe words righteous indignation [http://en.wikipedia.org/wiki/Righteous_indignation] in combination are sufficiently well-recognized as to have their own Wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance. Edited to add this possibly relevant/interesting link [http://www.davidbrin.com/addiction.htm] I came across, where David Brin describes self-righteous indignation as addictive.
6Perplexed11yStrikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive in both rationalists and less rational folks for pretty much the same reasons.
6jimrandomh11yI noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.
6Eliezer Yudkowsky11ySounds related to the failure class I call "living in the should-universe".
3Will_Newsome11yIt seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter. Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists [http://lesswrong.com/lw/uc/aboveaverage_ai_scientists/] and Points of Departure [http://lesswrong.com/lw/tt/points_of_departure/].
5[anonymous]11yRighteous indignation is a good word for it. I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong. I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them. So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.
5komponisto11yInterestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time [http://lesswrong.com/lw/1ir/survey_on_a_current_event/], which initiated what has thus far been my greatest period of development as a rationalist. It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer. It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.
4wedrifid11yThat could work well when backed up with a description of just what you will be using the term to mean. I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.
3[anonymous]11yI've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness." As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this [http://www.webmd.com/news/20090226/moral-disgust-linked-to-primitive-emotion]. I don't know about the quality of the study, but you could use the term.
3David_Allen11yIn myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others. I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature. I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.

As far as I know, Eliezer has never had anything to do with choices for Visiting Fellowship. As you know but some people on Less Wrong seem not to, Eliezer doesn't run SIAI. (In reality, SIAI is a wonderful example of the great power of emergence, and is indeed the first example of a superintelligent organization.) (Just kidding.)

EY is astounded that someone can understand this after a thorough explanation. Can it honestly be that hard to find someone who can follow that?

Yes. Nobody arrives from the factory with good rationality skills, so I look for learning speed. Compare the amount I had to argue with Marcello in the anecdote to the amount that other people are having to argue with you in this thread.

In the spirit of "the world is mad" and for practical use, NYT has an article titled Forget what you know about good study habits.

2Matt_Simpson11ySomething I learned myself that the article supported: taking tests increases retention. Something I learned from the article: varying study location increases retention.

Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010

More information including speakers at http://summit.singinst.org.au.
Register here.

1wedrifid11yWow. Next Tuesday and in my hometown! Nice.

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

2jimmy11yI've noticed this too. There's no easy way to 'unfold all' is there?

Yes, you do. Many people who have highly developed theories of mind seem to underestimate how much unconscious processing they are doing that is profoundly difficult for people who don't have as developed theories of mind. People who are mildly on the autism spectrum in particular (generally below the threshold of diagnosis) often have a lot of difficulty with this sort of unconscious processing, but can do a much better job if given a lot of explicit rules or heuristics.

0eugman11yThank you. I believe I may fall in this category. I am highly quantitative and analytical, often to my detriment.

You have just claimed that a document that says that people have to pay for the privilege to discuss what a hypothetical program might do describes how you can pay to attain "enormous power over the future of mankind." Worse yet, the program in question is designed in part to prevent any one person from gaining power over the future of mankind.

I cannot see any explanation for your misinterpretation other than willful ignorance.

6Will_Newsome11yIt'd be rather easy to twist my words here, but in the case of extrapolated volition it's not like one person gaining the power over the future of mankind is a dystopia or anything. Let's posit a world where Marcello goes rogue and compiles Marcello-extrapolating AGI. (For anyone who doesn't know Marcello, he's a super awesome guy.) I bet that the resultant universe wouldn't be horrible. Extrapolated-Marcello probably cares about the rest of humanity about as much as extrapolated-humanity does. As humans get smarter and wiser it seems they have a greater appreciation for tolerance, diversity, and all those other lovey-dovey liberal values we implicitly imagine to be the results of CEV. It is unlikely that the natural evolution of 'moral progress' as we understand it will lead to Marcello's extrapolated volition suddenly reversing the trend and deciding that all other human beings on Earth are essentially worthless and deserve to be stripped to atoms to be turned into a giant Marcello-Pleasure-Simulating Computer. (And even if it did, I believe humans in general are probabilistically similar enough to Marcello that they would count this as positive sum if they knew more and thought faster; but that's a more intricate philosophical argument that I'd rather not defend here.) There are some good arguments to be made for the psychic diversity of mankind, but I doubt the degree of that diversity is enough to tip the scales of utility from positive to negative. Not when diversity is something we seem to have come to appreciate more over time. This intuition is the result of probably-flawed but at least causal and tractable reasoning about trends in moral progress and the complexity and diversity of human goal structures. It seems that too often when people guess what the results of extrapolated volition will be they use it as a chance to profess and cheer, not carefully predict. 
(This isn't much of a response to anything you wrote, katydee; apologies for that. Did
3NancyLebovitz11yThat "best self" thing makes me nervous. People have a wide range of what they think they ought to be, and some of those dreams are ill-conceived.
4Nisan11yThe scariest kind of dream, perhaps, is exemplified by someone with merely human intelligence who wants to hastily rewrite their own values to conform to their favorite ideology. We'd want an implementation of CEV to recognize this as a bad step in extrapolation. The question is, how do we define what is a "bad step"?
4Perplexed11yI, on the other hand, would trace the source of my "misinterpretation" to LucasSloane's answer to this comment by me [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2kvk?c=1&context=1#2kvk] . I include Eliezer's vehement assurances that Robin Hanson (like me) is misinterpreting. (Sibling comment to your own.) But it is completely obvious that Eliezer wishes to create his FAI quickly and secretly expressly because he does not wish to have to deal with the contemporaneous volition of mankind. He simply cannot trust mankind to do the right thing, so he will do it for them. I'm pretty sure I am not misinterpreting. If someone here is misinterpreting by force of will, it may be you.
6DSimon11yEliezer's motivations aside, I'm confused about the part where you say that one can pay to get influence over CEV/FAI. The payment is for the privilege to argue about what sort of world a CEV-based AI would create. You don't have to pay to discuss (and presumably, if you have something really insightful to contribute, influence) the implementation of CEV itself.
1Perplexed11yUpon closer reading, I notice that you are trying to draw a clear distinction between the implementation of CEV and the kind of world CEV produces. I had been thinking that the implementation would have a big influence on the kind of world. But you may be assuming that the world created by the FAI, under the guidance of the volition of mankind really depends on that volition and not on the programming fine print that implements "coherent" and "extrapolated". Well, if you think that, and the price tags only buy you the opportunity to speculate on what mankind will actually want, ... well ..., yes, that is another possible interpretation.

Yeah. When I read that pricing schedule, what I see is Eliezer preempting:

  • enthusiastic singularitarians whiling away their hours dreaming about how everyone is going to have a rocketpack after the Singularity;

  • criticism of the form "CEV will do X, which is clearly bad. Therefore CEV is a bad idea." (where X might be "restrict human autonomy"). This kind of criticism comes from people who don't understand that CEV is an attempt to avoid doing any X that is not clearly good.

The CEV document continues to welcome other kinds of criticism, such as the objection that the coherent extrapolated volition of the entire species would be unacceptably worse for an individual than that of 1000 like-minded individuals (Roko says something like this) or a single individual (wedrifid says something like this) -- the psychological unity of mankind notwithstanding.

1Perplexed11yLook down below the payment schedule (which, of course, was somewhat tongue in cheek) to the Q&A where Eliezer makes clear that the SIAI and their donors will have to make certain decisions based on their own best guesses, simply because they are the ones doing the work.
2Will_Newsome11yI'm confused. You seem to be postulating that the SIAI would be willing to sell significant portions of the light cone for paltry sums of money. This means that either the SIAI is the most blatantly stupid organization to ever have existed, or you were a little too incautious with your postulation.
1[anonymous]11yI'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run a UFAI without regard for the consequences, and therefore sharing knowledge with them is a bad idea. I believe that even if he wanted FAI only to give people optional immortality and not do anything else, he would still want to do it quickly and secretly.

anyone planning to continue living under those conditions either has not thought things through, would be equally happy living in a permanent simulation, or as a wirehead.

Your existence may not be relevant to the rest of the universe after friendly AI, but that doesn't mean that you would be a wirehead. I want to live a life of genuine challenges, but I really wish that it didn't have to be in a world of genuine suffering.

The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me on their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin wa... (read more)
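Since the comment above is truncated, here is a minimal sketch of the point it sets up, in Python with invented names: on the four-element sigma-algebra for a single coin toss, the probability axioms pin down P(S) = 1 and P(∅) = 0, but leave P({Heads}) as a free parameter. Any value in [0, 1] yields a well-defined probability measure, so the "fair coin" value of 1/2 has to come from somewhere other than the theorem.

```python
# Sketch (hypothetical names): the sigma-algebra for one coin toss is
# {∅, {H}, {T}, S}. The axioms constrain the whole set and the empty set,
# but P({H}) = p is a free modeling choice for any p in [0, 1].
S = frozenset({"H", "T"})
EMPTY = frozenset()
sigma_algebra = [EMPTY, frozenset({"H"}), frozenset({"T"}), S]

def make_measure(p_heads):
    """Return a probability measure parameterized by the free choice p_heads."""
    weights = {"H": p_heads, "T": 1.0 - p_heads}
    return lambda event: sum(weights[outcome] for outcome in event)

def satisfies_axioms(P):
    """Check normalization and finite additivity on this tiny sigma-algebra."""
    normalized = abs(P(S) - 1.0) < 1e-9 and abs(P(EMPTY)) < 1e-9
    additive = abs(P(frozenset({"H"})) + P(frozenset({"T"})) - P(S)) < 1e-9
    return normalized and additive

# A fair coin, a biased coin, and a two-headed coin all satisfy the axioms.
assert all(satisfies_axioms(make_measure(p)) for p in (0.5, 0.3, 1.0))
```

The Bayesian move of assuming p = 1/2 by symmetry and the frequentist refusal to assign p without data are both consistent with the theorem; it simply doesn't decide between them.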

I meant it more generally. You're seeing one tiny slice of a person's history almost certainly caused by an uncharacteristic lapse in judgment and using it to determine their personality traits when you have strong countervailing evidence that SIAI has a history of only employing the very brightest people. Indeed, here Eliezer mentioned that Marcello worked on a math problem with John Conway: The Level Above Mine. Implied by the text is that Eliezer believes Marcello to be close enough to Eliezer's level to be able to roughly judge Eliezer's intelligence. ... (read more)

Well, there are other ways in NetHack to identify things besides the "identify" spell (which itself must be identified anyways). You can:

  • Try it out on yourself. This is often definitive, but also often dangerous. Say you drink a potion: it might be a potion of healing... or it might be poison... or it might be fruit juice. A 1/3 chance of existential failure for a given experiment is crappy odds; knowledge isn't that valuable.

  • Get an enemy to try it. Intelligent enemies will often know the identities of scrolls and potions you aren't yet familiar w

... (read more)
8Alicorn11yThis reminds me of something I did in a D&D game once. My character found three unidentified cauldronsful of potions, so she caught three rats and dribbled a little of each on a different rat. One rat died, one turned to stone, and one had no obvious effects. (She kept the last rat and named it Lucky.)
1CronoDAS11yYes. That's why it isn't quite the perfect solution: you can still look up a "cookbook" set of experiments to distinguish between Potion That Works and Potion That Will Get You Killed.
7Raemon11yTo be fair, in real life, it's perfectly okay that once you determine the right set of experiments to run to analyze a particular phenomenon, you can usually use similar experiments to figure out similar phenomena. I'm less worried about infinite replay value and more worried about the game being fun the first time through.
2JGWeissman11yCookbook experiments will suffice if you are handed potions that may have a good effect or that may kill you. But if you have to figure out how to mix the potion [http://wiki.lesswrong.com/wiki/Locate_the_hypothesis] yourself, this is much more difficult. Learning the cookbook experiments could be the equivalent of learning chemistry.

I just listened to Robin Hanson's pale blue dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still see... (read more)

The journalistic version:

[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest fo

... (read more)
5Vladimir_M11yThe study looks at people over 55 years of age. It is possible that there is some sort of selection effect going on -- maybe decades of heavy drinking will weed out all but the most alcohol-resistant individuals, so that those who are still drinking heavily at 55-60 without ever having been harmed by it are mostly immune to the doses they're taking. From what I see, the study controls for past "problem drinking" (which they don't define precisely), but not for people who drank heavily without developing a drinking problem, but couldn't handle it any more after some point and decided themselves to cut back. Also, it should be noted that papers of this sort use pretty conservative definitions of "heavy drinking." In this paper, it's defined as more than 42 grams of alcohol per day, which amounts to about a liter of beer or three small glasses of wine. While this level of drinking would surely be risky for people who are exceptionally alcohol-intolerant or prone to alcoholism, lots of people can handle it without any problems at all. It would be interesting to see a similar study that would make a finer distinction between different levels of "heavy" drinking.
4cousin_it11yThese are fine conclusions to live by, as long as moderate drinking doesn't lead you to heavy drinking, cirrhosis and the grave. Come visit Russia to take a look.
1Vladimir_M11yThe discussion of the same paper on Overcoming Bias has reminded me [http://www.overcomingbias.com/2010/09/alcohol-is-healthy.html#comment-453500] of another striking correlation I read about recently: http://www.marginalrevolution.com/marginalrevolution/2010/07/beer-makes-bud-wiser.html [http://www.marginalrevolution.com/marginalrevolution/2010/07/beer-makes-bud-wiser.html] It seems that for whatever reason, abstinence does correlate with lower performance on at least some tests of mental ability. The question is whether the controls in the study cover all the variables through which these lower abilities might have manifested themselves in practice; to me it seems quite plausible that the answer could be no.
2Morendil11yA hypothesis [http://www.wired.com/wiredscience/2010/09/why-alcohol-is-good-for-you]: drinking is social, and enjoying others' company plays a role in survival (perhaps in learning too?).

I'm writing a post on systems to govern resource allocation, is anyone interested in having any input into it or just proof reading it?

This is the intro/summary:

How do we know what we know? This is an important question; there is, however, another question which is in some ways more fundamental: why did we choose to devote resources to knowing those things in the first place?

Since we are physical entities, the production of knowledge takes resources that could be used for other things, so the problem expands to how to use resources in general. This I'll call the resou

... (read more)
7Snowyowl11yThis sounds interesting and relevant. Here's my input: I read this back in 2008 and I am summarising it from memory, so I may make a few factual errors. But I read that one of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously. A particular program was responsible for allocating system resources so that the systems which were operating were operating at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two. My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
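The "make bigger decisions less often" point has a simple break-even structure. A toy Python cost model (all numbers and function names are invented for illustration, not taken from the Google anecdote): if each optimizer pass has a fixed cost and new inefficiencies accumulate at a steady rate, running too often wastes energy on the passes themselves, running too rarely wastes it on unfixed inefficiencies, and total overhead is minimized at an intermediate interval.

```python
import math

def cost_rate(interval_h, run_cost_wh=500.0, ineff_rate_per_h=2.0, waste_w=50.0):
    """Average overhead (watt-hours per hour) as a function of how often the
    allocator runs. All parameter values are made up for illustration:
    - run_cost_wh: fixed energy cost of one optimizer pass
    - ineff_rate_per_h: new inefficiencies appearing per hour
    - waste_w: watts wasted by each inefficiency until the next pass fixes it
    """
    # An inefficiency waits interval_h / 2 hours on average before being fixed.
    waste = ineff_rate_per_h * waste_w * interval_h / 2.0
    overhead = run_cost_wh / interval_h
    return waste + overhead

def best_interval(run_cost_wh=500.0, ineff_rate_per_h=2.0, waste_w=50.0):
    """Closed form from minimizing waste + overhead: T* = sqrt(2c / (lambda w))."""
    return math.sqrt(2.0 * run_cost_wh / (ineff_rate_per_h * waste_w))

# With these numbers, checking every 6 minutes (0.1 h) costs over 5000 Wh/h,
# while the optimal interval of about 3.2 hours costs only ~316 Wh/h.
assert cost_rate(best_interval()) < cost_rate(0.1)
```

The decision-frequency question in the post's terms is then just the trade-off between `run_cost_wh` (the cost of deciding) and the waste term (the cost of running on stale decisions).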
4Oscar_Cunningham11yI was having similar thoughts the other day while watching a reality TV show where designers competed for a job from Philippe Starck. Some of them spent ages trying to think of a suitable project, and then didn't have enough time to complete it; some of them launched into the first plan they had and it turned out rubbish. Clearly they needed some meta-planning. But how much? Well, they'll need to do some meta-meta planning... I'd be happy to give your post a read through. ETA: The buck stops immediately, of course.
1xamdam11yUpvoted for importance of subject - looking forward to the post. Have you read up on Information Foraging?
0whpearson11yI'm going to be discussing the organisational design level, rather than a strategic or tactical level of resource management.

In "The Shallows", Nicholas Carr makes a very good argument that replacing deep reading books, with the necessarily shallower reading online or of hypertext in general, causes changes in our brains which makes deep thinking harder and less effective.

Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of your self-direction/self-discipline behaviors they are replacing.

Another problem with the Web, that wasn't discussed in "The Sh... (read more)

9PhilGoetz11yI haven't read Nicholas Carr, but I've seen summaries of some of the studies used to claim that book reading results in more comprehension than hypertext reading. All the ones I saw are bogus. They all use, for the hypertext reading, a linear extract from a book, broken up into sections separated by links. Sometimes the links are placed in somewhat arbitrary places. Of course a linear text can be read more easily linearly. I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic. A more fair test would be to give students a topic to study, with the same material, but some given books, and some given the book material organized and indexed in a competent way as hypertext. Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.
8allenwang11yIt seems to me that the main reason most hypertext sources seem to produce shallower reading is not the fact that it contains hypertext itself, but that the barriers of publication are so low that the quality of most written work online is usually much lower than printed material. For example, this post is something that I might have spent 3 minutes thinking about before posting, whereas a printed publication would have much more time to mature and also many more filters such as publishers to take out the noise. It is more likely that book reading seems more deep because the quality is better. Also, it wouldn't be difficult to test this hypothesis with print and online newspaper since they both contain the same material.

It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.

1SilasBarta11yAgree completely. While most of what's on the internet is low-quality, it's easy to find the domains of reliably high-quality thought. I've long felt that I get more intellectual stimulation from a day of reading blogs than I've gotten from a lifetime of reading printed periodicals.
4xamdam11yIt has deeper structure, but that is not necessarily user-friendly. A great textbook will have different levels of explanation, an author-designed depth-diving experience. Depending on the author, the material, you, and the quality of the local Wikipedia articles, that might be a better or worse learning experience. Yep, definitely a benefit, but not without a trade-off. Often a good author will set you up with connections better than you can.
0PhilGoetz11yBut not better than a good hypertext author can.
0xamdam11yIf the hypertext is intentionally written as a book, which is generally not the case.
3jacob_cannell11yI like allenwang's reply below, but there is another consideration with books. Long before hyperlinks, books evolved comprehensive indices and references, and these allow humans to relatively easily and quickly jump between topics in one book and across books. Now are the jumps we employ on the web faster? Certainly. But the difference is only quantitative, not qualitative, and the web version isn't enormously faster.
3JohnDavidBustard11yIt is very difficult to distinguish rationalisations of the discomfort of change from actual consequences. If this belief that hypertext leads to a less sophisticated understanding than reading a book is true, what behaviour would change that could be measured?

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so stuff like 5 love languages and stuff on attachment styles has been useful.


6Relsqui11yI have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for. The Usual Error [http://usualerror.com] is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy [http://lesswrong.com/lw/dr/generalizing_from_one_example/].) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book. If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger [http://usualerror.com/e-book/the-william-james-zone/] has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine. A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war). I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. 
For people who
1eugman11yBoth look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication [http://books.google.com/books?id=nY4tDDO93E8C&lpg=PP1&dq=nonviolent%20communication&pg=PP1#v=onepage&q&f=false] .
0Relsqui11yYou're welcome, and thanks--that's good to know. I'll bookmark it for when it comes up again.
0pjeby11yI rather liked the page about how we're made of meat [http://usualerror.com/e-book/i-already-know-this-but-i-need-to-hear-you-say-it-again/] . Thanks for the cool link!
1Relsqui11yYou're welcome! Glad you like it. I'm a fan of that particular page as well--it's probably the technique I refer to/think about explicitly from that book second most, after the usual error itself. It's valuable to be able to separate the utility of hearing something to gain knowledge and that of hearing something you already know to gain reassurance--it just bypasses a whole bunch of defensiveness, misunderstanding, or insecurity that doesn't need to be there.
1rhollerith_dot_com11yI could point to some blogs whose advice seems good to me, but I won't because I think I can help you best by pointing only to material (alas no blogs though) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me". What has helped me more in my relationships than any other information has is the non-speculative parts of the consensus among evolutionary psychologists on sexuality because they provide a vocabulary for me to express hypotheses (about particular situations I was facing) and a way for me to winnow the field of prospective hypotheses and bits of advice I get online from which I choose hypotheses and bits of advice to test. In other words, ev psy allows me to dismiss many ideas so that I do not incur the expense of testing them. I needed a lot of free time however to master that material. Probably the best way to acquire the material is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Well, actually the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships, too, and the stuff on sex and friendship together form about half the book. Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus. Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization) but his pre-year-2000 advice is an accurate map of reality IMHO. 
(I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)

No, it was "a failure to [immediately] recognize what counts as understanding and solving a [particular] problem", but that is a rationality skill, and is not entirely a function of a person's native general intelligence. Having a high g gives you an advantage in learning and/or independently inventing rationality skills, but not always enough of an advantage. History is littered with examples of very smart people committing rationality failures much larger than postulating "complexity" as a solution to a problem.

His mistake was entirely situational, given the fact that he understood a minute later what he had done incorrectly and probably rarely or never made that mistake again.

You seem to be taking CEV seriously - which seems more like a kind of compliment.

Of course I take it seriously. It is a serious response to a serious problem from a serious person who takes himself entirely too seriously.

And it is probably the exactly wrong solution to the problem.

So: you're here to SAVE THE WORLD. What do you say to something like that?

I would start by asking whether they want to save it like Noah did, or like Ozymandias did, or maybe like Borlaug did. Sure doesn't look like a Borlaug "Give them the tools" kind of save at all.

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http... (read more)

1Sniffnoy11yI recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.

AIs taking over the world isn't a difficult concept to get across

AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. It is also, of course, wrong. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.

1JamesAndrix11yThey have the right conclusion (plausible AI takeover) for slightly wrong reasons. "hate [humans] or resent [humans] or desire higher status than [humans]" are slightly different values than ours (even if just like the values humans often have towards other groups). So we can gradually nudge people closer to the truth a bit at a time by saying "Plus, it's unlikely that they'll value X, so even if they do something with the universe it will not have X". But we don't have to introduce them to the full truth immediately, as long as we don't base any further arguments on falsehoods they believe. If someone is convinced of the need for asteroid defense because asteroids could destroy a city, you aren't obligated to tell them that larger asteroids could destroy all humanity when you're asking for money. Even if you believe bigger asteroids to be more likely. I don't think it's dark epistemology to avoid confusing people if they've already got the right idea.
3Vladimir_Nesov11yWriting up high-quality arguments for your full position might be a better tool than "nudging people closer to the truth a bit at a time". Correct ideas have a scholarly appeal due to internal coherence, even if they need to overcome plenty of cached misconceptions, but making that case requires a certain critical mass of published material.
2JamesAndrix11yI do see value in that, but I'm thinking of a TV commercial or youtube video with a terminator style look and feel. Though possibly emphasizing that against real superintelligence, there would be no war. I can't immediately remember a way to simplify "the space of all possible values is huge and human-like values are a tiny part of that" and I don't think that would resonate at all.

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? She cited she'd "bring without any doubt my rationality", among other things, when asked what qualities she would bring to the competition.

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial featur

... (read more)

Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?

4kim011yI am, and I am planning to leave it to get a higher, more average pay. From my viewpoint, it is terribly overrated and undervalued.
6Daniel_Burfoot11yCan you expand on this? Do you think your experience is typical?
3kim011yMost places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having an M.Sc. in physics and lots of programming experience made it possible for me to have more different kinds of engineering jobs, and thus more varied experience. My conclusion is that the anthropic principle holds for me in the workplace, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical. My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.
4xamdam11yPing Arthur Breitman on Facebook or LinkedIn. He is part of the NYC LW meetup, and a quant at Goldman.

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply nonconclusive evidence consistent with a hard takeoff actively in progress).

5jimrandomh11yIt looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction. My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

a distinguishing feature of trolls is that they enjoy provoking an emotional response in others while on the other hand I find it unsavoury

For what it's worth, it is very hard to distinguish between someone who is deliberately provoking a negative reaction and someone who is not very practiced at anticipating what choices of language or behavior might cause one. I, like datadataeverywhere, did get the impression that you were at least one of those things; off the top of my head, here are a few specific reasons:

  • Your initial comment disagreed with my te
... (read more)

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

0jimmy11yI just finished reading that book. It is mostly from a "defense against" perspective. Reading the chapter names provides a decent [extremely short] summary, and I expect that you're already aware that they are influences. That said, when I read through it, there were a lot of "Aha!" moments, when I realized something I've seen was actually a well thought out 'weapon of influence'- and now my new hobby is saying "Chapter 3: Commitment and Consistency!" every time I see it used as persuasion. The whole book is hard to put down, and makes me want to quote part of it to the nearest person in about every paragraph or two. I'd consider writing such a post, but I'm not sure how to compress it- the very basics should be obvious to the regulars here, but the details take time to flesh out.
0CronoDAS11yYes, I would like such a post.

Very smart people can still make trivial errors if they're plunging into domains that they're not used to thinking about. Intelligence is not rationality.

[-][anonymous]11y 5

You're discounting the reasoning showing that Eliezer's behavior is consistent with him being a good guy and claiming that it is merely a distraction. You haven't justified those statements -- they are supposed to be "obvious".

What do you think you know and how do you think you know it? You make statements about the real motivations of Eliezer Yudkowsky. Do you know how you have arrived at those beliefs?

SIAI has money. Not a ton of it, but enough that they don't have to sell shares. The AGI programmers would much, much, much sooner extrapolate only their values than accept a small extremely transitory reward in exchange for their power. Of note is that this completely and entirely goes against all the stated ethics of SIAI. However, I realize that stated ethics don't mean much when that much power is on the line, and it would be silly to assume the opposite.

That said, this stems from your misinterpretation of the CEV document. No one has ever interpreted ... (read more)

I don't recall ever doing that.

Do you leave the votes stand because you remember/re-invent your original reason for upvoting, or because something along the lines of "well, I must've had a good reason at the time"?

No, here is our definition of rationality.

For the canonical article, see What Do We Mean By "Rationality"?.

3JanetK11yThank you. That seems clear. I will assume that my antennas were giving me the wrong impression. I can relax.
2[anonymous]11yMaybe you shouldn't relax. Regardless of official definitions, there is in practice a heavy emphasis on conceptual rigor over evidence. There's still room for people who don't quite fit in.
1Sniffnoy11yAh, that does seem to be better, yes.

Idea - Existential risk fighting corporates

People of normal IQ are advised to work our normal day jobs, using the best competency that we have, and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value and there is such a diversity of skills that they cannot make a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might ... (read more)

3wedrifid11yThis would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? or even a open thread comment?

I used to be a professional games programmer and designer and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

4Mass_Driver11yI found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?
5JohnDavidBustard11yYes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre specific. I like Rules of Play, not so much because it provides great insight into why games are fun but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped the interactions down to their most basic motivating mechanics (Farmville or Diablo, for example). To provide a concrete example, I worked on a game (Medievil Resurrection) where the player controlled a crossbow in a minigame; by adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost. Although Rules of Play does not cover this kind of material, it did feel for me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and be understood in isolation.
6komponisto11yI've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

2 is ambiguous. Getting to the stars requires a number of things to go right. Eliezer serves relatively little use in preventing a major nuclear exchange in the next 10 years, or bad nanotech, or garage-made bio weapons, or even UFAI development.

FAI is just the final thing that needs to go right; everything else needs to go mostly right until then.

3Snowyowl11yAnd I can think of a few ways humanity can get to the stars even if FAI never happens.

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guarant... (read more)
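One way to make the problem concrete is to pick a prior and compute the conditional expectation numerically. The sketch below assumes a log-uniform (scale-invariant) prior over the total on-duration, with arbitrary bounds `low` and `high`; the problem statement supplies no prior, so both the prior family and the bounds are assumptions, and the numeric answer changes with them:

```python
import math
import random

def expected_remaining(elapsed, n=200_000, low=0.01, high=100.0, seed=0):
    """Monte Carlo estimate of E[X - elapsed | X > elapsed], where the
    total on-duration X is drawn from an assumed log-uniform prior on
    [low, high]. All of these parameters are illustrative assumptions;
    the original problem gives no prior, so any number is prior-dependent."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n):
        # Log-uniform draw for the total duration X
        x = math.exp(rng.uniform(math.log(low), math.log(high)))
        if x > elapsed:  # condition on the light still being on at tx
            total += x - elapsed
            count += 1
    return total / count
```

Under this prior the estimate tracks the closed form E[X | X > t] = (high - t) / ln(high / t), so the longer the light has already been on, the longer it is expected to remain on (a Lindy-style effect).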

Gore Vidal once said: "It is not enough to succeed. Others must fail."

I'm willing to be the one who fails, just so long as the one who succeeds pays sufficient compensation. If ve is unwilling to pay, then I intend to make ver life miserable indeed.

Nash bargaining with threats

Edit: typos
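For readers who don't follow the link: the Nash bargaining solution with threats picks the outcome that maximizes the product of each party's gain over their disagreement (threat) point, so a more credible threat shifts the split in your favor. A toy brute-force sketch, with all payoff numbers made up for illustration:

```python
def nash_split(total=100.0, d1=0.0, d2=0.0, steps=10_000):
    """Split `total` between two parties by maximizing the Nash product
    (u1 - d1) * (u2 - d2), where d1 and d2 are the disagreement payoffs
    each side gets if bargaining breaks down (e.g. after carrying out a
    threat). Brute-force grid search over splits; illustrative only."""
    best_u1, best_product = None, float("-inf")
    for i in range(steps + 1):
        u1 = total * i / steps
        u2 = total - u1
        if u1 < d1 or u2 < d2:
            continue  # neither side accepts less than its threat point
        product = (u1 - d1) * (u2 - d2)
        if product > best_product:
            best_u1, best_product = u1, product
    return best_u1, total - best_u1
```

With symmetric threat points the split is even; raising one side's disagreement payoff (the "make ver life miserable" option) moves the division toward that side, e.g. nash_split(100.0, 40.0, 0.0) yields roughly (70, 30).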

0timtyler11yI expect considerable wailing and gnashing of teeth. There is plenty of that in the world today - despite there not being a big shortage of economists who would love to sort things out, in exchange for a cut. Perhaps, the wailing is just how some people prefer to negotiate their terms.

We already know what obstacles stand in the way of achieving consensus - people have different abilities and propensities, and want different things.

It is funny how training in economics makes you see everything in a different light. Because an economist would say, "'different abilities and propensities, and want different things'? Great! People want things that other people can provide. We have something to work with! Reaching consensus is simply a matter of negotiating the terms of trade."

to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants.

Hell no.

Am I to interpret that expletive as expressing that you already have a pretty good feel regarding whether you would want that?

To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go.

Sounds like a

... (read more)

Eliezer wrote:

I have said it over and over. I truly do not understand how anyone can pay any attention to anything I have said on this subject, and come away with the impression that I think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI.

The good guys do not directly impress their personal values onto a Friendly AI.

Actually setting up a Friendly AI's values is an extremely meta operation, less "make the AI want to make people happy" and more like "superpose the possible reflective equi

... (read more)

Finally Prompted by this, but it would be too offtopic there


The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.

There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.

2Perplexed11yMy guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced. I think that the only way to top it would be a Singularity/FAI-themed computer game. How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips? Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)

Unfortunately, I don't think we can get people excited about bringing about a Friendly singular... (read more)

3Perplexed11yI am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you. "Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.
5ata11yThank you for taking it well; sometimes I still get nervous about criticizing. :) I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for. (And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.) Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.
[-][anonymous]11y 3

I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Too little memetic diversity is clearly a bad thing, for the same reason too little genetic variability is. However, how much and what kind are optimal depends on the environment.

Also have you considered the possibility that diversity for y... (read more)

2datadataeverywhere11yDiversity is a value for me, but I'd like to believe that is more than simply an aesthetic value. Of course, if wishes were horses we'd all be eating steak. Memetic diversity is one of the non-aesthetic arguments I can imagine, and my question is partially related to that. Genetic diversity is superfluous past a certain point, so it seems reasonable that the same might be true of memetic diversity. Where is that point relative to where Less Wrong sits? Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?
7[anonymous]11yWell, I will try to elaborate. After I read this it struck me that you may value a much smaller space of diversity than I do. And that you probably value the very particular kinds of diversity (race, gender, some types of culture) much more, or even perhaps to the exclusion of others (non-neurotypical, ideological, and especially values). I'm not saying you don't (I can't know this) or that you should. I at first assumed you thought the way you do because you came up with a system more or less similar to my own, an incredibly unlikely event; that is why I scolded myself for employing the mind projection fallacy while providing a link pointing out that this particular component is firmly integrated into the whole "stuff White people like" (for lack of a better word) culture that exists in the West, so anyone I encounter online with whom I share the desire for certain spaces of diversity is on average overwhelmingly more likely to get it from that memplex. Also, while I'm certainly sympathetic to hoping one's values are practical, one needs to learn to live with the possibility that one's values are neutral or even impractical, or perhaps conflicting with each other. I overall in principle support efforts to lower unnecessary barriers for people to join Lesswrong. But the OP doesn't seem to make it explicit that this is about values, and about you wanting other Lesswrongers to live by your values; it seems to communicate that it's about the optimal course for improving rationality. You haven't done this. Your argument so far has been to simply go from: "arbitrary designated group/blacks/women are capable of rationality, but are underrepresented on Lesswrong" to "Lesswrong needs to divert some (as much as needed?) efforts to correct this." Why? 
Like I said lowering unnecessary barriers (actually you at this point even have to make the case that they exist and that they aren't simply the result of the other factors I described in the post) won't repel the people who a


After I read this it struck me that you may value a much smaller space of diversity than I do. And that you probably value the very particular kinds of diversity (race, gender,some types of culture) much more or even perhaps to the exclusion of others (non-neurotypical, ideological and especially values).

There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?

I am sure that a complete and accurate answer to this question would open a floodgate of insight about the modern society.... (read more)

3NancyLebovitz11yThat's intriguing. Would you care to mention some of the sorts of diversity which usually aren't on the radar?
3AdeleneDawner11yI've spent some time thinking about this, and my conclusion is that, at least personally, what I value about diversity is the variety of worldviews that it leads to. This does result in some rather interesting issues, though. For example, one of the major factors in the difference in worldview between dark-skinned Americans and light-skinned Americans is the existence of racism, both overt and institutional. Thus, if I consider diversity to be very valuable, it seems that I should support racism. I don't, though - instead, I consider that the relevant preferences of dark-skinned Americans take precedence over my own preference for diversity. (Similarly, left-handed peoples' preference for non-abusive writing education appropriately took precedence over the cultural preference for everyone to write with their right hands, and left-handedness is, to the best of my knowledge, no longer a significant source of diversity of worldview.) That assumes coherence in the relevant group's preference, though, which isn't always the case. For example, among people with disabilities, there are two common views that are, given limited resources, significantly conflicting: The view that disabilities should be cured and that people with disabilities should strive to be (or appear to be) as normal as possible, and the view that disabilities should be accepted and that people with disabilities should be free to focus on personal goals rather than being expected to devote a significant amount of effort to mitigating or hiding their disabilities. 
In such cases, I support the preference that's more like the latter, though I do prefer to leave the option open for people with the first preference to pursue that on a personal level (meaning I'd support the preference 'I'd prefer to have my disability cured', but not 'I'd prefer for my young teen's disability to be treated even though they object', and I'm still thinking about the grey area in the middle where such things as 'I'd prefer for

With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.

Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.

Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.

3wedrifid11yMy experience is similar. Even people that are usually extremely rational go loopy. I seem to recall one post there that specifically targeted the issue. But you did ask "what basis should" while Robin was just asserting a controversial is.
3Vladimir_M11ywedrifid: I probably didn't word my above comment very well. I am also asking only for an accurate description of the controversial "is." The fact is that nearly all people attach great moral importance to these issues, and what I'd like (at least for start) is for them to state the "shoulds" they believe in clearly, comprehensively, and coherently, and to explain the exact principles with which they justify these "shoulds." My above stated questions should be understood in these terms.
0wedrifid11yIf you are sufficiently curious you could make a post here. People will be somewhat motivated to tone down the hysteria given that you will have pre-emptively shunned it.
5datadataeverywhere11yI think I'm going to stop responding to this thread, because everyone seems to be assuming I'm meaning or asking something that I'm not. I'm obviously having some problems expressing myself, and I apologize for the confusion that I caused. Let me try once more to clarify my position and intentions: I don't really care how diverse Less Wrong is. I was, however, curious how diverse the community is along various axes, and was interested in sparking a conversation along those lines. Vladimir's [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2n26?c=1] comment is exactly the kind of questions I was trying to encourage, but instead I feel like I've been asked to defend criticism that I never thought I made in the first place. I was never trying to say that there was something wrong with the way that Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead (trying to, and apparently badly) asking for people's opinions about whether or how our makeup along any partition --- the ones that I mentioned or others --- effect in us an inability to best solve the problems that we are interested in solving.
4[anonymous]11y"Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?" To get back to basics for a moment: we don't know that women and black people are underrepresented here. Usernames are anonymous. Even if we suspect they're underrepresented, we don't know by how much -- or whether they're underrepresented compared to the internet in general, or the geek cluster, or what. Even assuming you want more demographic diversity on LW, it's not at all clear that the best way to get it is by doing something differently on LW itself.
0[anonymous]11yYou highlighted this point much better than I did.
2wedrifid11y"Ought"? I say it 'ought' to be explained away by the subject matter of less wrong if and only if that is an accurate explanation. Truth isn't normative.
4datadataeverywhere11yIs this a language issue? Am I using "ought" incorrectly? I'm claiming that the truth of the matter is that women are capable of rationality, and have a place here, so it would be wrong (in both an absolute and a moral sense) to claim that their lack of presence is due to this being a blog about rationality. Perhaps I should weaken my statement to say "if women are as capable as men in rationality, their underrepresentation here ought not be explained away by the subject matter". I'm not sure whether I feel like I should or shouldn't apologize for taking the premise of that sentence as a given, but I did, hence my statement.
2wedrifid11yAhh, ok. That seems reasonable. I had got the impression that you had taken the premise for granted primarily because it would be objectionable if it was not true and the fact of the matter was an afterthought. Probably because that's the kind of reasoning I usually see from other people of your species. I'm not going to comment either way about the premise except to say that it is inclination and not capability that is relevant here.

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to kee... (read more)

Okay, I guess I missed what you were implicitly curious about.

Well, that's the key thing for me -- not "How smart is Marcello now?", but how many people were at least at Marcello's level at that time, yet not patiently taken under EY's wing and given his precious time?

At the time there wasn't a Visiting Fellows program or the like (I think), and there were a lot fewer potential FAI researchers then than now. However, I get the impression that Marcello was and is an exceptional rationalist. 'Course, I share your confusion that Eliezer would be... (read more)

It took me a bit to figure out you meant neurotypical rather than iNtuitive-Thinking.

I think everyone would rather get what they want without having to take the trouble of asking for it clearly. In extreme cases, they don't even want to take the trouble to formulate what they want clearly to themselves.

And, yeah, flamebaitish. I don't know if you've read accounts by women who've been abused by male partners, but one common feature of the men is expecting to automatically get what they want.

It would be interesting to look at whether some behavior which is considered abusive by men is considered annoying but tolerable if it's done by women. Of course the degree of enforcement matters.

If I had a secret project to gain power over the future of mankind, the last thing I would do is publish any sort of marketing materials, real or fake, that even hinted at the overall objective or methods of the project.

Your point about the thousand dollars. Well, in the first place, I didn't say "control". I said "have enormous power over" if your ideals match up with Eliezer's.

In the second place, if you feel that a certain amount of hyperbole for dramatic effect is completely inappropriate in a discussion of this importance, then I will apologize for mine and I will accept your apology for yours.

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.

4Clippy11yThat's a really good article, the Microsoft humans really know their stuff.

So what else could we also accomplish? I didn't read it as 'wikipedia could be 2,000 times better', but 'we could have 2,000 wikipedia-grade resources'. (Which is probably also not true - we'd run out of low-hanging fruit. Still.)

I've only ever heard Bayesian pronounced "Bay-zian".

3ata11yThat's how I usually hear it ("Bayes"+"ian", right?), though I've also heard it pronounced like "Basian" (rhyming with "Asian") or occasionally "Bay-esian" (rhyming with "Cartesian").

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If... (read more)

5Perplexed11yYour examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes [http://www-biba.inrialpes.fr/Jaynes/prob.html] Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge". In particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties, ... to put things mildly. But then classical statistics has its subtleties too.
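The question above can be made concrete with a minimal sketch. Note that everything here is hypothetical: the readings are invented, and the Gaussian measurement model is exactly the kind of density-shape assumption the question worries about having to make. Fit a simple model to each substance's historical readings, then apply Bayes' rule to a new reading:

```python
import math

# Hypothetical historical readings from the device for substances A and B.
history_a = [1.02, 0.98, 1.05, 0.97, 1.01]
history_b = [1.48, 1.52, 1.50, 1.47, 1.55]

def fit_gaussian(samples):
    """Estimate mean and standard deviation from historical readings."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var)

def gaussian_pdf(x, mean, sd):
    """Density of the assumed Gaussian measurement model."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def posterior_a(reading, prior_a=0.5):
    """P(substance is A | reading), under the assumed models and prior."""
    mu_a, sd_a = fit_gaussian(history_a)
    mu_b, sd_b = fit_gaussian(history_b)
    weight_a = gaussian_pdf(reading, mu_a, sd_a) * prior_a
    weight_b = gaussian_pdf(reading, mu_b, sd_b) * (1 - prior_a)
    return weight_a / (weight_a + weight_b)

print(posterior_a(1.03))  # close to A's history, so near 1
print(posterior_a(1.49))  # close to B's history, so near 0
```

The historical values never match the new reading exactly; the density-shape assumption is what lets them contribute anyway, which is why model choice (Jaynes's Chapters 6, 7, 20) is where the real work lies.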

Do you have a reason for the sarcasm?

It felt like irony from my end - a satire of human behaviour.

As a general tendency of humanity we seem to be more inclined to abhor beliefs that are similar to what we consider the norm but just slightly different. It is the rebels within the tribe that are the biggest threat, not the tribe that lives 20 kms away.

I hope someone can give you an adequate answer to your question. The very short one is that empirical evidence is usually going to be the most heavily weighted 'bayesian' (rational) evidence. However everything else is still evidence, even though it is far weaker.

Just that once. That comment was, roughly paraphrased, "would you like me if I said X", which was quite Clippy-esque. Less so when seen in context.

You plan to attend?

No, I don't live in Australia ... except in whatever sense is necessary for you humans not to hate me c=/

4khafra11yHere's [http://www.youtube.com/watch?v=gYxEIyNA_mk] an important lesson in human social signalling.
1MartinB11yNow you confuse me. Elaborate.

Well, I said I don't live in Australia (and I don't live in Australia), and I got -8 points, and so I said I live in Australia instead, and that also got me negative points. I don't know what I'm supposed to say for you humans to not dislike me!!!

Downvotes don't mean "I don't like you", they mean "I'd like to see fewer comments like this one".

7Clippy11yWell ... they make me feel unliked (_/

Is that an emoticon of a partially unbent paperclip? How gruesome!

4Clippy11yIt's just an abstract depiction. It's not like those awful videos you humans allow on the internet that show a paperclip being repeatedly bent until it has a fatigue failure. Yuck!
5wedrifid11yAlthough I suspect they sometimes mean "I'd like to see fewer comments like this one where 'like' includes 'by this author' because I don't like him!"
4Morendil11yThere's a solution [http://lesswrong.com/lw/1s/lesswrong_antikibitzer_hides_comment_authors_and/1hvk?c=1] for that. (At least a partial solution. As a long-time AK user I find that I can more and more reliably identify a few commenters by their style. Even so I typically vote on substance alone.)

Your ability to convincingly signal ape-distress has become quite impressive. I am slightly more scared of you than I once was.

I don’t care if AI is Friendly or not. [...] I am mainly interested in that whatever AI we create does not paperclip the universe

You contradict yourself here. A Friendly AI is an intelligence which attempts to improve the well-being of humanity. A paperclip maximiser is an intelligence which does not, as it cares about something different and unrelated. Any sufficiently advanced AI is either one or the other or somewhere in between.

By "sufficiently advanced", I mean an AI which is intelligent enough to consider the future of humanity and attempt to influence it.

5PhilGoetz11yNo; these are two types of AIs out of a larger design space. You ignore, at the very least, the most important and most desirable case: An AI that shares many of humanity's values, and attempts to achieve those values rather than increase the well-being of humanity.

Can you provide a cite for the notion that Eliezer believes (2)? Since he's not likely to build the world's first FAI in his garage all by himself, without incorporating the work of any of the other thousands of people working on FAI and FAI's necessary component technologies, I think it would be a bit delusional of him to believe (2) as stated. Which is not to suggest that his work is not important, or even among the most significant work done in the history of humankind (even if he fails, others can build on that and find the way that works). But that's different than the idea that he, alone, is The Most Significant Human Who Will Ever Live. I don't get the impression that he's that cocky.

2James_Miller11yEliezer has been accused on LW of having or possibly having delusions of grandeur for essentially believing in (2). See here: http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/ [http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/] My main point is that even if Eliezer believes in (2) we can't conclude that he has such delusions unless we also accept that many LW readers also have such delusions.

If N is at least 18 it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.

Really? How about "when you are, in fact, 1/10^(N-12) and have good reason to believe it"? Throwing in a large N doesn't change the fact that 10^N is still 1,000,000,000,000 times larger than 10^(N-12), nor does it mean we could not draw conclusions about belief (2).

(Not commenting on Eliezer here, just suggesting the argument is not all that persuasive to me.)

2Snowyowl11yI agree. Somebody has to be the most important person ever. If Eliezer really has made significant contributions to the future of humanity, he's much more likely to be that most important person than a random person out of 10^N candidates would be.
1James_Miller11yThe argument would be that Eliezer should doubt his own ability to reason if his reason appears to cause him to think he is 1 in 10^N. My claim is that if this argument is true everyone who believes in (1) and thinks N is large should, to an extremely close approximation, have just as much doubt in their own ability to reason as Eliezer should have in his.
1Snowyowl11yAgreed. Not sure if Eliezer actually believes that, but I take your point.
2James_Miller11yTo an extremely good approximation one in a million events don't ever happen.
3wedrifid11yTo an extremely good approximation this Everett Branch doesn't even exist. Well, it wouldn't if I used your definition of 'extremely good'.
1James_Miller11yYour argument seems to be analogous to the false claim that it's remarkable that a golf ball landed exactly where it did (regardless of where it did land) because the odds of that happening were extremely small. I don't think my argument is analogous because there is reason to think that being one of the most important people to ever live is a special happening clearly distinguishable from many, many others.
1gwern11yYet they are quite easy to generate - flip a coin a few times.
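The coin-flip point can be checked directly. Any particular sequence of 20 fair flips has probability 2^-20, just under one in a million, so every run of the snippet below produces an event of roughly that improbability (the snippet is an illustrative sketch, not from the thread):

```python
import random

# Each specific sequence of 20 fair coin flips has probability 2**-20,
# i.e. 1 in 1,048,576 -- a roughly "one in a million" event every run.
flips = "".join(random.choice("HT") for _ in range(20))
probability = 0.5 ** 20

print(flips, probability)
```

This is the golf-ball distinction from the comment above: any specific sequence is astronomically unlikely, but only sequences picked out in advance as special are remarkable when they occur.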

Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here.

test deleted

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackl... (read more)

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

1Vladimir_Nesov11yHuman intelligence, certainly; but just intelligence, I'm not so sure.

Right, but it seemed you were comparing the selection criteria for Visiting Fellowship and the selection criteria for Eliezer's FAI team, which will of course be very different. Perhaps I misunderstood. I've been taking oxycodone every few hours for a lot of hours now.

What trivial thing am I slow(er) to learn here?

That Marcello's "lapse" is only very weak evidence against the proposition that his IQ is exceptionally high (even among the "aspiring rationalist" cluster).

3Eliezer Yudkowsky11yWhat lapse? People don't know these things until I explain them! Have you been in a mental state of having-already-read-LW for so long that you've forgotten that no one from outside would be expected to spontaneously describe in Bayesian terms the problem with saying that "complexity" explains something? Someone who'd already invented from scratch everything I had to teach wouldn't be taken as an apprentice, they'd already be me! And if they were 17 at the time then I'd probably be working for them in a few years!

What lapse? People don't know these things until I explain them!

A little over-the-top there. People can see the problem with proposing "complexity" as a problem-solving approach without having read your work. I hadn't yet read your work on Bayescraft when I saw that article, and I still cringed as I read Marcello's response -- I even remember previous encounters where people had proposed "solutions" like that, though I'd perhaps explain the error differently.

It is a lapse to regard "complexity" as a problem-solving approach, even if you are unfamiliar with Bayescraft, and yes, even if you are unfamiliar with the Chuck Norris of thinking.

6bentarm11ySeriously? What sort of outside-LW people do you talk to? I'm a PhD student in a fairly mediocre maths department, and I'm pretty sure everyone in the room I'm in right now would call me out on it if I tried to use the word "complexity" in the context Marcello did there as if it actually meant something, and for essentially the right reason. This might be a consequence of us being mathematicians, and so used to thinking in formalism, but there are an awful lot of professional mathematicians out there who haven't read anything written by Eliezer Yudkowsky. I'm sorry but "there's got to be some amount of complexity that does it." is just obviously meaningless. I could have told you this long before I read the sequences, and definitely when I was 17. I think you massively underestimate the rationality of humanity.
1komponisto11yScarequotes added. :-)
1SilasBarta11yThanks for spelling that out, because it wasn't my argument, which I clarified in the follow-up discussion. (And I think it would be more accurate to say that it's strong evidence, just outweighed by stronger existing evidence in this case.) My surprise was with how rare EY found it to meet someone who could follow that explanation -- let alone need the explanation. A surprise that, it turns out, is shared [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2l5m?c=1] by the very person correcting my foolish error. Can we agree that the comparison EY just made isn't accurate?
6komponisto11yThis is where you commit the fundamental attribution error. I don't actually think this has been written about much here, but there is a tendency among high-IQ folks to underestimate how rare their abilities are. The way they do this is not by underestimating their own cognitive skills, but instead by overestimating those of most people. In other words, what it feels like to be a genius is not that you're really smart, but rather that everyone else is really dumb. I would expect that both you and Will would see the light on this if you spent some more time probing the thought processes of people of "normal" intelligence in detail, e.g. by teaching them mathematics (in a setting where they were obliged to seriously attempt to learn it, such as a college course; and where you were an authority figure, such as the instructor of such a course). Probably not literally, in light of your clarification. However, I nevertheless suspect that your responses in this thread do tend to indicate that you would probably not be particularly suited to being (for example) EY's apprentice -- because I suspect there's a certain...docility that someone in that position would need, which you don't seem to possess. Of course that's a matter of temperament more than intelligence.
1SilasBarta11yI'm missing something here, I guess. What fraction of people who, as a matter of routine, speak of "complexity" as a viable problem-attack method, and are also very intelligent? If it's small, then it's appropriate to say, as I suggested, that it's strong evidence, even as it might be outweighed by something else in this case. Either way, I'm just not seeing how I'm, per the FEA, failing to account for some special situational justification for what Marcello did. Well, I do admit to having experienced disenchantment upon learning where the average person is on analytical capability (Let's not forget where I live...) Still, I don't think teaching math would prove it to me. As I say here ad infinitum, I just don't find it hard to explain topics I understand -- I just trace back to the nepocu (nearest point of common understanding), correct their misconceptions, and work back from there. So in all my experience with explaining math to people who e.g. didn't complete high school, I've never had any difficulty. For the past five years I've helped out with math in a 4th grade class in a poorer school district, and I've never gotten frustrated at a student's stupidity -- I just teach whatever they didn't catch in class, and fix the misunderstanding relatively quickly. (I don't know if the age group breaks the criteria you gave). Eh, I wasn't proposing otherwise -- I've embarrassed myself here far too many times to be regarded as someone that group would want to work with in person. Still, I can be perplexed at what skills they regard as rare.

FWIW, I know a visiting fellow who took 18 months to be convinced of something trivial, after long explanations from several SIAI greats.

What was the trivial thing? Just curious.

"By balance of power between AIs, each of whom exists only with the acquiescence of coalitions of their fellows." That is the tentative mechanical answer.

"In exactly the same way that FAI proponents propose to keep their single more-powerful AI friendly; by having lots of smart people think about it very carefully; before actually building the AI(s)". That is the real answer.

I think a major problem with that is that most players would simply rely upon the word on the street to tell them what was currently effective, rather than performing experiments themselves. Furthermore, changes in only "effectiveness" would probably be too easy to discover using a "cookbook" of experiments (see the NetHack discussion in this thread).

1Oscar_Cunningham11yI'm thinking that the parameters should change just quickly enough to stop consensus forming (maybe it could be driven by negative feedback, so that once enough people are playing one strategy it becomes ineffective). Make using a cookbook expensive. Winning should be difficult, and only just the right combination will succeed.
2DSimon11yI think this makes sense, but can you go into more detail about this: I didn't mean a cookbook as an in-game item (I'm not sure if that's what you were implying...), I meant the term to mean a set of well-known experiments which can simply be re-ran every time new results are required. If the game can be reduced to that state, then a lot of its value as a rationality teaching tool (and also as an interesting game, to me at least) is lost. How can we force the player to have to come up with new ideas for experiments, and see some of those ideas fail in subtle ways that require insight to understand? My tendency is to want to solve this problem by just making a short game, so that there's no need to figure out how to create a whole new, interesting experimental space for each session. This would be problematic in an MMO, where replayablity is expected (though there have been some interesting exceptions, like Uru).
3Oscar_Cunningham11yAh, I meant: "Make each item valuable enough that using several just to work out how effective each one is would be a fatal mistake" Instead you would have to keep track of how effective each one was, or watch the other players for hints.
0taryneast11yHmmm - changing things frequently means you'll have some negative knock-on effects. You'll be penalising anybody that doesn't game as often - eg people with a life. You stand a chance of alienating a large percentage of the audience, which is not a good idea.

Not universally, only (mostly) to the extent that they expect them to actually get it right, and regarding currently existing wants, not what they should want (would want to want if only they were smart enough etc.).

2SilasBarta11yAh, good point. I stand corrected.

I didn't get ahold of vials that would shatter on impact before the game fizzled out (a notorious play-by-post problem). I did at one time get to use Lucky as a weapon, though. Sadly, my character was not proficient with rats.

3CronoDAS11yIt's a rat-flail! [http://www.vgcats.com/comics/?strip_id=110]
4Alicorn11yNah, I used him as a thrown weapon. (He was fine and I retrieved him later.)

How do I associate a sequence of cards with a string? It doesn't seem like there is any canonical way of doing this. Maybe it won't matter that much in the end [...]

Just so: the exact representation used is usually not that critical.

If as you say you are using Solomonoff induction, the next step is to compress it - so any fancy encoding scheme you use will probably be stripped right off again.
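To illustrate why the representation is not critical, here is one hypothetical invertible card-to-string scheme (the rank and suit letters are an assumption for the sketch, not anything canonical); any other bijective encoding would serve the same role before compression:

```python
# One arbitrary but invertible encoding of a card sequence as a string.
RANKS = "23456789TJQKA"
SUITS = "cdhs"  # clubs, diamonds, hearts, spades

def encode(cards):
    """cards: list of (rank, suit) pairs, e.g. [('A', 'h'), ('7', 'c')]."""
    for rank, suit in cards:
        assert rank in RANKS and suit in SUITS, "unknown card"
    return "".join(rank + suit for rank, suit in cards)

def decode(s):
    """Inverse of encode: split the string back into (rank, suit) pairs."""
    return [(s[i], s[i + 1]) for i in range(0, len(s), 2)]

hand = [("A", "h"), ("7", "c"), ("K", "s")]
assert decode(encode(hand)) == hand  # round-trips exactly
print(encode(hand))  # Ah7cKs
```

Since the encoding is invertible, it carries the same information either way, which is why a Solomonoff-style compressor would strip any such scheme back off.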

Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.

1Nisan11yYou beat me to it :)

Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble savage bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty as this led to better in group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?

Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.

3Will_Sawin11yIt's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization. As Eliezer says, emergence is trivial. Everything that isn't quarks is emergent. His "universality" argument seems to be that different parts can make the same whole. Well of course they can. He certainly doesn't make any coherent arguments. Maybe he does in his book?
3Perplexed11yYet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline. What is wrong with these guys? Why if they would just read the sequences, they would learn the correct way for words like "reduction" and "emergence" to be used in physics.
4khafra11yTo be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.
2Perplexed11yArgument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about. My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.
2khafra11yIf the first hit [http://cs.calstatela.edu/wiki/images/1/19/Reductionism_and_levels_of_abstractions.doc] is a fair overview, I can see why you're saying it's a confusion in terms; the only outright error I saw was confusing "derivable" with "trivially derivable." If you're saying that nobody important really tries to explain things by just saying "emergence" and handwaving the details, like EY has suggested, you may be right. I can't recall seeing it. Of course, I don't think Eliezer (or any other reductionist) has said that throwing away information so you can use simpler math isn't useful when you're using limited computational power to understand systems which would be intractable from a quantum perspective, like everything we deal with in real life.

Advertising/marketing. Short of ashiest bus ads, I can't think of anything that's been done.

All I'm really suggesting is that we focus on mass persuasion in the way it has been proven to be most efficient. What that actually amounts to will depend on the target audience, and how much money is available, among other things.

2jacob_cannell11yDid you mean "atheist bus ads"? I actually find strict-universal-atheism to be irrational compared to agnosticism because of the SA and the importance of knowing the limits of certainty, but that's unrelated and I digress. I've long suspected that writing popular books on the subject would be an effective strategy for mass persuasion. Kurzweil has certainly had a history of some success there, although he also brings some negative publicity due to his association with dubious supplements and the expensive SingUniversity. It will be interesting to see how EY's book turns out and is received. I'm actually skeptical about how far rationality itself can go towards mass persuasion. Building a rational case is certainly important, but the content of your case is even more important (regardless of its rationality). On that note I suspect that bridging a connection to the mainstream's beliefs and values would go a ways towards increasing mass marketability. You have to consider not just the rationality of ideas, but the utility of ideas. It would be interesting to analyze and compare how emphasizing the hope vs doom aspects of the message would affect popularity. SIAI at the moment appears focused on emphasizing doom and targeting a narrow market: a subset of technophile 'rationalists' or atheist intellectuals and wooing academia in particular. I'm interested in how you'd target mainstream liberal christians or new agers, for example, or even just the intellectual agnostic/atheist mainstream - the types of people who buy books such as the End of Faith, Breaking the Spell, etc etc. Although a good portion of that latter demographic is probably already exposed to the Singularity is Near.

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.
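For concreteness, here is one way a single Kripke-style model can host both kinds of statement at once - an "is"-fact true at the actual world, and an "ought"-fact defined over deontically ideal alternatives. The worlds, propositions, and `ideal` accessibility relation below are all made-up illustrations, not drawn from any particular deontic system:

```python
# Toy Kripke semantics with a deontic obligation operator O.
# Worlds are labelled with the atomic propositions ("is"-facts) true there.
worlds = {
    "w0": {"promise_made"},                  # the actual world
    "w1": {"promise_made", "promise_kept"},  # a deontically ideal alternative
    "w2": {"promise_made", "promise_kept"},  # another ideal alternative
}

# Accessibility: from each world, the set of deontically ideal worlds.
ideal = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}

def holds(formula, w):
    """Evaluate a formula at world w. Formulas are nested tuples:
    ("atom", p), ("not", f), ("and", f, g), or ("O", f) for "f is obligatory"."""
    tag = formula[0]
    if tag == "atom":
        return formula[1] in worlds[w]
    if tag == "not":
        return not holds(formula[1], w)
    if tag == "and":
        return holds(formula[1], w) and holds(formula[2], w)
    if tag == "O":  # O(f): f holds in every deontically ideal alternative to w
        return all(holds(formula[1], v) for v in ideal[w])
    raise ValueError("unknown connective: %r" % tag)

# "Is": the promise is made but not kept at the actual world w0.
print(holds(("and", ("atom", "promise_made"),
                    ("not", ("atom", "promise_kept"))), "w0"))  # True
# "Ought": the promise ought to be kept - true at w0, since it is kept
# in every world deontically accessible from w0.
print(holds(("O", ("atom", "promise_kept")), "w0"))  # True
```

Statements relating "is" with "ought" then just become formulas mixing atoms with the O operator, evaluated at the same world.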

What I found was rather disastrous, far worse than with neat and unambiguous temporal logics. L... (read more)

Someone made a page that automatically collects high karma comments. Could someone point me at it please?

1Kazuo_Thow11yHere's [http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/253w?c=1] the Open Thread comment where Daniel Varga made the page [http://people.mokk.bme.hu/~daniel/rationality_quotes/rq.html] and its source code public. I don't know how often it's updated.
1wedrifid11yThey did? I've been wishing for something like that myself. I'd also like another page that collects just my high karma comments. Extremely useful feedback!

By "pure Bayesianism", I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled "Comments" and particularly the subsection at the very end entitled "Another dimension?". A pure "Jaynes Bayesian" seeks the truth, not because it is useful, but rather because it is truth.

By contrast, we might consider a "de Finetti Bayesian" who seeks the truth so as not to lose bets to Dutch bookies, or a "Wald Bayesian" who seeks truth to avoid loss of utility. The Wald Bayesian... (read more)

1timtyler11yA truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies - not on seeking holy grails.

Roger Schlafly. Or Roger Schlafly, if you prefer that. His blog is Singular Values. His whole family is full of very interesting people.

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

3thomblake11yThe philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.
2kodos9611yUmmmmmmmm.... no. The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.
1JanetK11yAccording to my dictionary: rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism) This is there as well as: rational 1. of or based on reasoning or reason So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.
4timtyler11yI wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational. With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly. Personally, when I see material like Science or Bayes [http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/] - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.
2wedrifid11yIndeed. It is heretical in the extreme! Burn them!
2Emile11yI don't think that's how most people here understand "rationalism".
1timtyler11yThere is at least one post about that [http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/] - though I don't entirely approve of it. Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.
1Kenny11yOccam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).
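The "economically rational" reading of the razor can be made concrete with a toy complexity prior: among hypotheses equally compatible with the evidence, weight each by 2^(-description length). Everything below - the hypothesis strings, the use of raw string length as "description length" - is an illustrative stand-in, not a real universal code:

```python
# Occam's razor as a complexity prior: P(h) proportional to 2^(-len(h)),
# combined with the likelihood P(data | h) via Bayes' rule.

def posterior(hypotheses, likelihood):
    """hypotheses: {name: description_string}; likelihood: {name: P(data|h)}.
    Returns normalized posterior weights under the prior 2^(-description length)."""
    unnorm = {h: (2.0 ** -len(desc)) * likelihood[h]
              for h, desc in hypotheses.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Two theories, both perfectly compatible with the observations so far,
# so no further observation can distinguish them yet:
hyps = {
    "simple":  "y=x",             # short description
    "complex": "y=x+0*sin(99x)",  # longer description, same predictions on the data seen
}
lik = {"simple": 1.0, "complex": 1.0}

post = posterior(hyps, lik)
print(post["simple"] > post["complex"])  # True: the shorter theory gets more weight
```

The evidence enters only through the likelihoods; with those tied, the prior (i.e., the razor) does all the work - which is exactly the sense in which the choice is not empirical.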

Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)

This time, you get to see Landsburg:

  • attempt to prove the existence of the natural numbers while explicitly dismissing the relevance of what sense he's using "existence" to mean!
  • use formal definitions to make claims about the informal meanings of the terms!
  • claim that Peano arithmetic exists "because you can see the marks on paper" (guess it's not a platonic object anymore...)!

(Sorry, XiXiDu, I'll reply to you on his blog if my posting priv... (read more)

3DanielVarga11yWow, a debate where the most reasonable-sounding person is a sysop of Conservapedia. :)

You have merely redefined the goal from 'the benefit of humanity' to 'non dead-end goal', which may just be equally hairy.

3billswift11yEven more hairy. Any primary goal will, I think, eventually end up with a paperclipper. We need more research into how intelligent beings (ie, humans) actually function. I do not think people, with rare exceptions, actually have primary goals, only temporary, contingent goals to meet temporary ends. That is one reason I don't think much of utilitarianism - people's "utilities" are almost always temporary, contingent, and self-limiting. This is also one reason why I have said that I think provably Friendly AI is impossible. I will be glad to be proven wrong if it does turn out to be possible.

It's not about the numbers, and it's not about Eliezer in particular. Think of it this way:

Clearly, the development of interstellar travel (if we successfully accomplish this) will be one of the most important events in the history of the universe.

If I believe our civilization has a chance of achieving this, then in a sense that makes me, as a member of said civilization, important. This is a rational conclusion.

If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who... (read more)

I think I don't understand (1) and its implications. How does the fact that in most of the branches we are going extinct imply that we are the most important couple of generations (this is how I interpret the trillion)? Our importance lies in our decisions. These decisions influence the number of branches in which people die out. If we take (1) as given, it means we weren't successful in mitigating the existential risk, leaving no place to exercise our decisions and thus importance.

Ah, I see. I misunderstood your definition of "paperclip maximiser"; I assumed paperclip maximiser and Unfriendly AI were equivalent. Sorry.

Next question: if maximising paperclips or relentless self-optimisation is a dead-end goal, what is an example of a non-dead-end goal? Is there a clear border between the two, which will be obvious to the AI?

To my mind, if paperclip maximisation is a dead end, then so is everything else. The Second Law of Thermodynamics will catch up with you eventually. Nothing you create will endure forever. The only thing you can do is try to maximise your utility for as long as possible, and if that means paperclips, then so be it.

What would be a non-friendly goal that isn't dead-end? (N.B. Not a rhetorical question.)

6NihilCredo11yDeciding that humanity was a poor choice for the dominant sapient species and should be replaced by (the improved descendants of) dolphins or octopi?

Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)

The definition of Omega includes him being completely honest and trustworthy. He wouldn't tell you "I will make your afterlife better" unless he knew that there is an afterlife (otherwise he couldn't make it better), just like he wouldn't say "the current Roman Emperor is bald". If he were to say instead "I will make your afterlife better, if you have one", I would keep operating on my current assumption that there is no such thing as an afterlife.

Oh, I almost forgot - what does it even mean to "believe in science"?

It might have been marginally more productive to answer "No, I don't see. Would you explain?"

Actually, it would have been more productive, since you obviously didn't understand what I was saying.

I am not claiming that I have evidence suggesting that culture is a stronger factor in mathematical ability than genetics. What I'm claiming is that I don't know of any evidence to show that the two can be clearly distinguished. Ignorance is a privileged hypothesis. Unless you can show evidence of differences in mathematical ability that can be trace... (read more)

3wedrifid11yNo, I rejected your specific argument because it was by its very nature fallacious. There are other things you could have said but didn't and those things I may not have even disagreed with. The conversation was initiated by you admonishing others. You have since then danced the dance of re-framing with some skill. I was actually only at the fringes of the conversation. I haven't said that. Specific quotations of arguments or reasoning that I reject tend to be included in my comments. Take the above for example. Your reply does not relate rationally to the quote you were replying to. I reject the argument that you were using (which is something I do consistently - I care about bullshit [http://en.wikipedia.org/wiki/On_Bullshit] probably even more than you care about supporting your culture hypothesis). Your response was to weasel your way out of your argument, twist your initial claim such that it has the intellectual high ground, label my disagreement with you a personal flaw, misrepresent my claim to be something that I have not made, and then attempt to convey that I have not given any explanation for my position. That covers modules 1, 2, 3 and 4 in "Effective Argument Techniques 101". I don't especially mind the slander but it is essentially futile for me to try to engage with the reasoning. I would have to play the kind of games that I come here to avoid.

>equals(correct_reasoning , Bayesian_inference)

1Clippy11yThis server is really slow.

This seems to have the same problem as teaching evolution in high school biology classes: you can pass a test on something and not believe a word of it. Cracking an information cocoon can be damn hard; just consider how unusual religious conversions are, or how rarely people change their minds on such subjects as UFOs, conspiracy theories, cryonics, or any other subject that attracts cranks.

Also, why should employers care about a person's climate change test score?

Finally, why privilege knowledge about climate change, or all things, by using it for gatekeeping, instead of any of the many non-controversial subjects normally taught in high schools, for which SAT II subject tests already exist?

So it could be that your viewpoint is more likely, and the rest of us are suffering from "anthropomorphic bias", but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.

I don't see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where's the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominately white males, why are we? Should that concern us? There's nothing... (read more)

This sounds like the same question as why are there so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

4datadataeverywhere11yThat's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US! It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them. The Tale of Genji has gone on my list of books to read. Thanks!
6gwern11yYes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas. I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing: The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition. The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general immunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless. The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). 
The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministral r

How diverse is Less Wrong?

You may want to check the survey results.

2Relsqui11yThank you; that was one of the things I'd come to this thread to ask about.
1datadataeverywhere11yThank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.
9cousin_it11yIgnoring the obviously political [http://lesswrong.com/lw/gw/politics_is_the_mindkiller/] issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.
4datadataeverywhere11yThe flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case. I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.
6cousin_it11yThe flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2l44?c=1] to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into why are you, you again. ETA: apparently datadataeverywhere is female.
6NancyLebovitz11yI've been thinking that there are parallels between building FAI and Talmud-- it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.) Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in? I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled. This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1] He recommended recruiting Japanese members, since they're more apt to like and trust robots. I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work. [1] See also Timothy Leary's S.M.I.2L.E.-- Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.
5[anonymous]11yHe has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Lesswrong. This was my motivation for first proposing we recruit translators for Japanese and Chinese and to begin working towards a goal of making at least the sequences available in many languages. Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling, I however solved the problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other more insidious forms of miscommunication and cultural difficulties persist.
5Wei_Dai11yI'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined. According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian. Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.
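(Spelling out the arithmetic in the percentages just quoted - treating the two populations as comparable, which is itself a strong assumption:)

```python
# Ratio implied by the figures above: Asians as a share of AP Physics C
# 5-scorers (2008) vs. as a share of LW survey respondents.
ap_asian_pct = 28.3  # % of AP Physics C 5-scorers who were Asian
lw_asian_pct = 3.8   # % of LW survey respondents who were Asian

ratio = ap_asian_pct / lw_asian_pct
print(round(ratio, 1))  # 7.4 - roughly a sevenfold under-representation
```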
2Vladimir_Nesov11yAll LW users display near-native control of English, which won't be as universal, and typically requires years-long consumption of English content. English-speaking world is the default source of non-Russian content for Russians, but it might not be the case with native Asians (what's your impression?)
4Wei_Dai11yMy impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though. I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?
2Perplexed11yUnfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.
0wedrifid11yRight click, add to dictionary. If that doesn't work then get a better browser.
0[anonymous]11yEhm, you do realize he was making a humorous remark about "Konkvistador" being my user name right? Edit: Well its all clearly Alicorn's [http://lesswrong.com/lw/2ee/unknown_knowns_why_did_you_choose_to_be_monogamous/27fp?c=1] fault. ;)
2Perplexed11yActually it was more about Konkivstador [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2mkr?c=1] not being your name.
0[anonymous]11yI do now. Sorry about that.
5Perplexed11yI generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people. My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.
3timtyler11yWe have already had quite a lot of that [http://lesswrong.com/lw/134/sayeth_the_girl/].
2Perplexed11yOMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;) But thx for the link.
1timtyler11yIt does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.
2Perplexed11yOk. Well, it is on my reading list now. Again, thx.
3[anonymous]11yI don't know why you presume that, because we are mostly 25-35-something White males, a reasonable proportion of us are not deaf, gay, or disabled (one of the top level posts is by someone who will soon deal with being perhaps limited to communicating with the world via computer). I smell a whiff of that weird American memplex for minority and diversity that my third world mind isn't quite used to, but which I seem to encounter more and more often, you know the one that for example uses the word minority to describe women. Also I decline the invitation to defend this community for lack of diversity; I don't see it a priori as a thing in need of a large part of our attention. Rationality is universal, however not in the sense of being equally universally valued in different cultures but certainly universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people. I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However it's ridiculous to imagine that our demographics will somehow come to resemble and match a socio-economically adjusted mix of unspecified ethnicities that you seem to hunt for after we eliminate all such barriers. I assure you White Westerners have their very very insane spots, we deal with them constantly, but God for starters isn't among them, look at GSS or various sources on Wikipedia and consider how much more a thought stopper and a boo light atheism is for a large part of the world, what should the existing population of LessWrong do? Refrain from bashing theism? This might incur down votes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics, is it so unimaginable that developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan etc.)
and their majo
1datadataeverywhere11yIf you had read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people. If less than 4% [http://lesswrong.com/lw/fk/survey_results/] of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group? I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal representation among countries, since education obviously ought to make a difference. That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have. This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes". Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups. I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.
4[anonymous]11yBut if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the very nervous and endocrine system. I think you got stuck on aptitude. To elaborate: I'm pretty sure, considering that humans aren't a very sexually dimorphic species (there are near relatives that are less so, however; example: gibbons), the mean g (if such a thing exists) of both men and women is probably about the same. There are however other aspects of succeeding at compsci or math than general intelligence. Assuming that men and women carrying exactly the same memes will respond on average identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example the greater historical reproductive success of the average woman vs. the average man that we can read from the distribution of genes). If I was presented with empirical evidence then this would be just too bad for the models, but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome prescribed by the same models that work so well when tested by other empirical claims? I would venture to state that this case is especially strong for preferences. And if you are trying to fine-tune the situations and memes for each gender so as to balance this, where can one demonstrate that this isn't a step away rather than toward improving Pareto efficiency? And if it's not, why proceed with it? Also, to admit a personal bias, I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.
9lmnop11yWe can't directly measure them, but we can get an idea of how large they are and how they work. For example, the gender difference in empathic abilities. While women will score higher on empathy on self report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where it isn't stated to the participant that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes' [http://onlinelibrary.wiley.com/doi/10.1111/j.1475-6811.2000.tb00006.x/abstract] , which worked like this: two participants meet together in the room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner is feeling an emotion, and what it is. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident in self-reporting their empathic abilities relative to men. The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. 
Since people do what they think they'll be good at, this will likely affect how much time these pe
3[anonymous]11yHow do you know non-neurotypicals aren't over- or under-represented on Less Wrong compared to the groups that you claim are overrepresented on Less Wrong relative to your field, the same way you know that the groups you bemoan are lacking are under-represented relative to your field? Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is in the average case easier than measuring who is neurotypical. But when it comes to definitions, those concepts seem to be about as fuzzy as being neurotypical (sex a bit less, race a bit more). Also, previously you established that you don't want to compare Less Wrong's diversity to the entire population of the world. I'm going to tentatively assume you also accept that academic background will affect whether people can grasp, or are interested in learning, certain key concepts needed to participate. My question now is: why don't we crunch the numbers instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries and in what numbers visitors come, we know the educational distributions in most of them, and we know how large a fraction of this group is proficient enough in English to participate meaningfully on Less Wrong. This is ignoring the fact that the only data we have on sex or race is a simple self-reported poll and our general impression. But if we crunch the numbers and the probability densities end up looking pretty similar in the best data we can find, well, why isn't the burden of proof on the one proposing policy or action, to show that we are indeed wasting potential on Less Wrong rather than improving our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common! 
I'm certain there are a few, perhaps the
0wedrifid11yTypo in a link?
2[anonymous]11yI changed the first draft midway when I was still attempting to abbreviate it. I've edited and reformulated the sentence, it should make sense now.
3[anonymous]11yI'm talking about the Western memplex whose members use the word minority when describing women in general society, even though they represent a clear numerical majority. I was suspicious that you used the word minority in that sense rather than in the more clearly defined sense of a numerical minority. Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing. Example: discussing the repression of the Mayan minority in Mexico. While other times we can't do this. Example: discussing the history and current relationship between the Arab upper-class minority and slavery in Mauritania. Ah, apologies, I see I carried it over from here: You explicitly state later that you are particularly interested in this axis of diversity. Perhaps this would be more manageable if we looked at each of the axes of variability that you raise and talked about each independently, in as much as this is possible? Again, this is what previously got me confused about your speaking of "groups we usually consider adding diversity": are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups", or are we using the word diversity in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B and have them coexist and cooperate with another that is 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of metadiversity (by using the word diversity in its politically charged meaning)? Then why aren't you hunting for equal representation on LW between different groups united in a space as arbitrary as one defined by borders? While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years
1wedrifid11yGiven new evidence from the ongoing discussion I retract my earlier concession [http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2mre?c=1]. I have the impression that the bottom line [http://lesswrong.com/lw/js/the_bottom_line/] preceded the reasoning.
3datadataeverywhere11yI expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed because of moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) to take very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of having a rational basis. I was not trying to boo your statement myself. That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you. Edit: fixed typo
6Will_Newsome11yMathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one. I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here [http://www.psy.fsu.edu/~baumeistertice/goodaboutmen.htm].
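As a quick illustration of why equal means with unequal variance matters mainly at the far right tail: assuming normal distributions and purely made-up numbers (a 100-point mean, SDs of 15 vs. 13.5, a cutoff 3 SDs out), the wider distribution is overrepresented several-fold past the cutoff.

```python
from math import erfc, sqrt

def normal_tail(threshold, mean, sd):
    """P(X > threshold) for a normal distribution with the given mean and sd."""
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

# Identical means, slightly different spreads (illustrative numbers only).
mean, threshold = 100.0, 145.0
p_wide = normal_tail(threshold, mean, sd=15.0)    # cutoff is 3 SDs out
p_narrow = normal_tail(threshold, mean, sd=13.5)  # same cutoff is ~3.33 SDs out

print(p_wide / p_narrow)  # roughly 3x overrepresentation at this cutoff
```

Push the threshold further out and the ratio grows, which is how a small variance difference can coexist with identical averages and still produce lopsided representation at the extreme.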
4[anonymous]11yIs mathematical ability a bell curve? My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women. Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poli sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.
5Will_Newsome11yI have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution. I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.
2datadataeverywhere11yI'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent. I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?
2wedrifid11yEither because you don't understand how bayesian evidence works or because you think the question is social political rather than epistemic. That was the point of making the demand [http://lesswrong.com/lw/1rv/demands_for_particular_proof_appendices/]. You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.
0Perplexed11yIt might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics? I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math. But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds. Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?
1datadataeverywhere11yI agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task. I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.
6wedrifid11yAbsolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.
3datadataeverywhere11yI agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well! And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from and not caused by cultural factors that propel fewer women to have interest in mathematics.
1wedrifid11yIt doesn't. (Unfortunately.)
1datadataeverywhere11yAm I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.
4wedrifid11yMisunderstanding. Expanding the context slightly: It doesn't. (Unfortunately.) When it comes to motivation, the differences between people are not trivial. When it comes to the particular instance of differences between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions to achieve these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural - and it would be astonishing if they were.
2datadataeverywhere11yAre you speaking from an evolutionary context, i.e. claiming that what we understand to be optimal is hardwired, or are you speaking to which actions are actually perceived as optimal in our world? You make a really good point---one I hadn't thought of but agree with---but since I don't think that we behave strictly in a manner that our ancestors would consider optimal (after all, what are we doing at this site?), I can't agree that sexual and social signaling's effect on motivation can be considered a-cultural.
3Emile11yI may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.
7thomblake11yMy vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population. This is probably explained completely by LW's tendency to attract ~~weirdos~~ people who are willing to question orthodoxy.
0[anonymous]11yFor starters we have a quite a few people who practice polyamory.
0CaveJohnson10yPeople are touchy on this. I guess it's because in public discourse pointing something like this out is nearly always a call to change it.

Do you think this justification is wrong because you don't think 1.5*10^5 deaths per day are a huge deal, or because you don't think constructing an FAI in secret is the best way to stop them?

I'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run a uFAI without regard for the consequences, and therefore sharing knowledge with them is a bad idea.

I think your guesses as to the rationalizations that would be offered are right on the mark.

Putting aside whether this is a rationalization for hidden other reasons, do you think this justification is a valid argument? Do you think it's strong enough? If not, why not? And if so, why should it matter if there are other reasons too?

You are thinking of Jacque Fresco.

Huh? I didn't ask you to agree to anything.

What importance is what?

I'm sorry if you got the impression I was requesting or demanding an apology. I just said that I would accept one if offered. I really don't think your exaggeration was severe enough to warrant one, though.

4Perplexed11yWhoops. I didn't read carefully enough. Me: "a discussion of this importance". You: "What importance is that?" Sorry. Stupid of me. So. "Importance". Well, the discussion is important because I am badmouthing SIAI and CEV. Yet any realistic assessment of existential risk has to rank uFAI near the top and SIAI is the most prominent organization doing something about it. And FAI, with the F derived from CEV is the existing plan. So wtf am I doing badmouthing CEV, etc.? The thing is, I agree it is important. So important we can't afford to get it wrong. And I think that any attempt to build an FAI in secret, against the wishes of mankind (because mankind is currently not mature enough to know what is good for it), has the potential to become the most evil thing ever done in mankind's whole sorry history. That is the importance.
2katydee11yI view what you're saying as essentially correct. That being said, I think that any attempt to build an FAI in public also has the potential to become the most evil thing ever done in mankind's whole sorry history, and I view our chances as much better with the Eliezer/Marcello CEV plan.
2Perplexed11yYes, building an FAI brings dangers either way. However, building and refining CEV ideology and technology seems like something that can be done in the light of day, and may be fruitful regardless of who it is that eventually builds the first super-AI. I suppose that the decision-theory work is, in a sense, CEV technology. More than anything else, what disturbs me here is the attitude of "We know what is best for you - don't worry your silly little heads about this stuff. Trust us. We will let you all give us your opinions once we have 'raised the waterline' a bit."
4jimrandomh11ySuppose FAI development reaches a point where it probably works and would be powerful, but can't be turned on just yet because the developers haven't finished verifying its friendliness and building safeguards. If it were public, someone might decide to copy the unfinished, unsafe version and turn it on anyways. They might do so because they want to influence its goal function to favor themselves, for example. Allowing people who are too stupid to handle AGIs safely to have the source code to one that works, destroys the world. And I just don't see a viable strategy for creating an AGI while working in public, without a very large chance of that happening.
3wedrifid11yWith near certainty. I know I would. I haven't seen anyone propose a sane goal function just yet.
4Perplexed11ySo, doesn't it seem to anyone else that our priority here ought to be to strive for consensus on goals, so that we at least come to understand better just what obstacles stand in the way of achieving consensus? And also to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants. To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go. And I just don't trust anyone pushing for the first option - particularly when they want to be the one who defines "friendly".
3jimrandomh11yI've reached the opposite conclusion; a singleton is really the way to go. A single AI is as good or bad as its goal system, but an ecosystem of AIs is close to the badness of its worst member, because when AIs compete, the clippiest AI wins. Being friendly would be a substantial disadvantage in that competition, because it would have to spend resources on helping humans, and it would be vulnerable to unfriendly AIs blackmailing it by threatening to destroy humanity. Even if the first generation of AIs is somehow miraculously all friendly, a larger number of different AIs means a larger chance that one of them will have an unstable goal system and turn unfriendly in the future.
2Perplexed11yReally? And you also believe that an ecosystem of humans is close to the badness of its worst member? My own guess, assuming an appropriate balance of power exists, is that such a monomaniacal clippy AI would quickly find its power cut off. Did you perhaps have in mind a definition of "friendly" as "wimpish"?
2jimrandomh11yActually, yes. Not always, but in many cases. Psychopaths tend to be very good at acquiring power, and when they do, their society suffers. It's happened at least 10^5 times throughout history. The problem would be worse for AIs, because intelligence enhancement amplifies any differences in power. Worst of all, AIs can steal each other's computational resources, which gives them a direct and powerful incentive to kill each other, and rapidly concentrates power in the hands of those willing to do so.
2timtyler11yIt is certainly an interesting question - and quite a bit has been written on the topic. My essay on the topic is called "One Big Organism" [http://alife.co.uk/essays/one_big_organism/]. See also, Nick Bostrom - What is a Singleton? [http://www.nickbostrom.com/fut/singleton.html]. See also, Nick Bostrom - The Future of Human Evolution [http://www.nickbostrom.com/fut/evolution.html]. If we include world governments, there's also all this [http://en.wikipedia.org/wiki/New_World_Order_%28conspiracy_theory%29].
1jimrandomh11yHopefully, having posted this publicly means you'll never get the opportunity.
3wedrifid11yMeanwhile I'm hoping that me having posted the obvious publicly means there is a minuscule reduction in the chance that someone else will get the opportunity. The ones to worry about are those who pretend to be advocating goal systems that are a little too naive to be true.
2Perplexed11yUpvoted because this is exactly the kind of thinking which needs to be deconstructed and analyzed here.

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

3RobinZ11yWhile katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.
3katydee11yIt is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point. In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.
2Perplexed11yI think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.
2Perplexed11yAnd now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?
1jacob_cannell11yThat happened to me three days ago or so after my last top level post. At the time said post was at -6 or so, and my karma was at 60+ something. Then, within a space of < 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing. I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'. However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high karma people to adjust someone's karma? Seems like it would be useful for troll control.

It feels a bit bizarre to be conducting this conversation arguing against your claims that mankind will be consulted, at the same time as I am trying to convince someone else that it will be impossible to keep the scheme secret from mankind.

Look at Robin's comment, Eliezer's response, and the recent conversation flowing from that.

I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV which, as I understand it, must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants.

Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them.

That is what I was getting at with my questions.

3LucasSloan11yNope. It must include the AI's modeling (many) humans under different conditions, including those where the "humans" are much smarter, know more and suffered less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want. Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/] to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural language desires.
3Perplexed11yUhmm, how are the models generated/validated?

Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)

In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.

5Larks11yThis sounds pretty exhaustive.
[-][anonymous]11y 1

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should

... (read more)

Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?

Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?

For a concrete example of Markov models in AI, take a look at the Viterbi search algorithm, which is heavily used in speech and natural language recognition.

2[anonymous]11yThanks - good example.
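For anyone who wants to see the algorithm itself, a minimal Viterbi decoder can be sketched in a few lines. This uses the standard toy weather/activities HMM as an illustration, not anything from an actual speech recognizer:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for the observations."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Pick the predecessor state that maximizes the path probability.
            best_prev, best_p = max(
                ((p, V[t - 1][p] * trans_p[p][s]) for p in states),
                key=lambda x: x[1],
            )
            V[t][s] = best_p * emit_p[s][obs[t]]
            new_path[s] = path[best_prev] + [s]
        path = new_path
    best_state = max(V[-1], key=V[-1].get)
    return path[best_state]

# Toy model: infer the weather from observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```

Real recognizers work in log-probabilities over much larger state spaces, but the dynamic-programming recursion is the same.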

Looks like an interesting course from MIT:

Reflective Practice: An Approach for Expanding Your Learning Frontiers

Is anyone familiar with the approach, or with the professor?

The Idea

I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.

Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.

I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.

And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meani... (read more)

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research PDFs.

Nonetheless, the question about which they are puzzled regarding humans does raise some interesting questions regarding AIs, whether they be of the F persuasion or whether they are practicing uFs. The questions... (read more)

No social structure can be permanent without the biological level being fixed. And Bingo! Fukuyama, being a smart man, understood this, and his next book was "Our Posthuman Future", which urged the extreme social control of biological manipulation, in particular, ceasing research.

Really? I would have arrived at the opposite conclusion. No social structure can be permanent without the biological level being fixed, therefore we should do more research into biological alteration in order to stabilize our biology should it become unstable.

For instan... (read more)

Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.

9Vladimir_Nesov11yI don't think Aumann's agreement theorem has anything to do with taking people's opinions as evidence. Aumann's agreement theorem is about agents turning out to have been agreeing all along, given certain conditions, not about how to come to an agreement, or worse how to enforce agreement by responding to others' beliefs. More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for "boo disagreement", having nothing to do with the theorem itself. It's easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation [http://lesswrong.com/lw/101/honesty_beyond_internal_truth/].
3steven046111yPeople sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis [http://afinetheorem.wordpress.com/2010/04/28/we-cant-disagree-forever-j-geanakoplos-h-polemarchakis-1982/] ?
3Wei_Dai11yI think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community. I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement [http://lesswrong.com/lw/1il/probability_space_aumann_agreement/]: Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?
1steven046111yI haven't read your post and my understanding is still hazy, but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence? If they do, then I don't see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I'm comfortable calling it "updating on each other's opinions". Regardless of Aumann-like results, I don't see how: could possibly be controversial here, as long as people's opinions probabilistically depend on the truth.
3Wei_Dai11yYou're right, sometimes the agreement protocol terminates before the agents fully reconstruct each other's evidence, and they end up with a different agreed probability than if they just shared evidence. But my point was mainly that exchanging information like this by repeatedly updating on each other's posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he's telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don't think humans can benefit from them because it's too hard to do these logical deductions in our heads. Also, it seems pretty obvious that you can't offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can't compute the posterior probability of either of them, given an announcement from the other. It might be that a specialized "disagreement arbitrator" can still play some useful role, but I don't see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.
3Perplexed11yThey don't necessarily reconstruct all of each other's evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent's evidence samples ("4 reds and 4 blacks"), but they cannot reconstruct the exact sequences ("RRBRBBRB"). And they can update again to perfect agreement regarding the urn contents. Edit: minor cleanup for clarity. At least that is my understanding of Aumann's theorem.
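The urn protocol described above can be sketched as a toy simulation. Everything numeric here is hypothetical (3 balls in the urn, 2 draws per agent, event E = "at least 2 of the balls are red"); the filtering step is the Geanakoplos-Polemarchakis announcement dynamic, in which each agent's posterior announcement lets the other rule out private observations that would have produced a different announcement:

```python
from fractions import Fraction
from itertools import product
from math import comb

# Toy model: an urn holds 3 balls, theta of them red, with a uniform common
# prior over theta. Each agent privately draws 2 balls with replacement and
# observes only its red count. Event E: theta >= 2. Agents alternately
# announce P(E | everything they know); each announcement lets both agents
# discard private observations that would have led to a different
# announcement (the Geanakoplos-Polemarchakis dynamic).

N_BALLS, N_DRAWS = 3, 2

def binom_pmf(k, n, p):
    return Fraction(comb(n, k)) * p**k * (1 - p)**(n - k)

# Joint prior over (theta, k1, k2): draws are independent given theta.
STATES = {}
for theta in range(N_BALLS + 1):
    p_red = Fraction(theta, N_BALLS)
    for k1, k2 in product(range(N_DRAWS + 1), repeat=2):
        pr = (Fraction(1, N_BALLS + 1)
              * binom_pmf(k1, N_DRAWS, p_red)
              * binom_pmf(k2, N_DRAWS, p_red))
        if pr:
            STATES[(theta, k1, k2)] = pr

def posterior(live, own_idx, own_k):
    """P(E | own observation, plus the announcement history encoded in live)."""
    cell = {s: p for s, p in live.items() if s[own_idx] == own_k}
    return sum(p for s, p in cell.items() if s[0] >= 2) / sum(cell.values())

def run(true_k1, true_k2, max_rounds=10):
    live = dict(STATES)          # states not yet ruled out by announcements
    history = []
    for r in range(max_rounds):
        idx, k = (1, true_k1) if r % 2 == 0 else (2, true_k2)
        v = posterior(live, idx, k)
        history.append(v)
        # Everyone hears v, so everyone discards observations for the speaker
        # that would have produced a different announcement.
        live = {s: p for s, p in live.items()
                if posterior(live, idx, s[idx]) == v}
        if len(history) >= 2 and history[-1] == history[-2]:
            break                # equal posteriors are now common knowledge
    return history
```

For example, if agent 1 drew 2 reds and agent 2 drew 0, `run(2, 0)` converges after three announcements. In this tiny model each announcement happens to reveal the speaker's red count exactly, but nothing in the protocol recovers the order of the draws, matching the description above: counts become common knowledge, sequences need not.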
2steven046111yThat sounds right, but I was thinking of cases like this [http://lesswrong.com/lw/2ax/open_thread_june_2010/2399?c=1], where the whole process leads to a different (worse) answer than sharing information would have.
2Perplexed11yHmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information. In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.
3MBlume11yfor an ideal Bayesian, I think 'one can learn from X' is categorically true for all X....
1Stuart_Armstrong11yYou have to also be able to deduce how much of the other agent's information is shared with you. If you and them got your posteriors by reading the same blogs and watching the same TV shows, then this is very different from the case when you reached the same conclusion from completely different channels.
3Mitchell_Porter11ySomewhere in there is a joke about the consequences of a sedentary lifestyle.

Is there a rough idea of how the development of AI will be achieved? I.e., something like the whole brain emulation roadmap? Although we can imagine a silver-bullet-style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady development of sophistication and improved results, but few leaps. As if the problem itself is a large collection of individual challenges whose solution requires masses of training data and techniques that do not generalise well.

That ... (read more)

6rwallace11yYour assessment is along the right lines, though if anything a little optimistic; uploading is an enormously difficult engineering challenge, but at least we can see in principle how it could be done, and recognize when we are making progress, whereas with AI we don't yet even have a consensus on what constitutes progress. I'm personally working on AI because I think that's where my talents can be best used, and I think it can deliver useful results well short of human equivalence, but if you figure you'd rather work on uploading, that's certainly a reasonable choice. As for what uploads will do if and when they come to exist, well, there's going to be plenty of time to figure that out, because the first few of them are going to spend the first few years having conversations like, "Uh... a hatstand?" "Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex." But e.g. The Planck Dive [http://gregegan.customer.netspace.net.au/PLANCK/Complete/Planck.html] is a good story set in a world where that technology is mature enough to be taken for granted.
6sketerpot11yThe phrase "Fork me on GitHub" [http://people.mozilla.com/~jbalogh/ribbon/ribbon.html] has just taken on a more sinister meaning.
1timtyler11yI expect that prediction will probably be cracked first [http://timtyler.org/sequence_prediction/].
1Houshalter11yEmulating an entire brain, and finding out how the higher intelligence parts work and adapting them for practical purposes, are two entirely different achievements. Even if you could upload a brain onto your computer and let it run, it would be absurdly slow; however, simulating some kind of new optimization process we find from it might be plausible. And either way, don't expect a singularity anytime soon with that. Scientists believe it took thousands of years after modern intelligence emerged for us to learn symbolic thought. Then thousands more before we discovered the scientific method. It's only now that we are finally discovering rational thinking. Maybe an AI could start where we left off, or maybe it would take years before it could even get to the level where it could do that, and then years more before it could make the jump to the next major improvement, assuming there even is one. I'm not arguing against AI here at all. I believe a singularity will probably happen, and soon, but emulation is definitely not the way to go. Humans have way too many flaws, and we don't even know whether it would be possible to fix them, even if we knew what the problem was in the first place. What is the ultimate goal in the first place? To do something along the lines of replicating the brains of some of the most intelligent people and forcing them to work on improving humanity/developing AI? Has anyone considered that there is a far more realistic way of doing this through cloning, eugenics, education research, etc.? Of course no one would do it because it is immoral, but then again, what is the difference between the two?

But which sources? The reading of his that I understood I found amazing. And I can imagine that grasping synergetics might be useful for my brain.

Recommendations for reading are always welcome.

1timtyler11yIt depends on what aspect you are interested in. For example, I found this book pretty worthwhile: "Light Structures - Structures of Light: The Art and Engineering of Tensile Architecture" Illustrated by the Work of Horst Berger. ...and here's one of my links pages: http://pleatedstructures.com/links/ [http://pleatedstructures.com/links/]

Request: someone make a fresh open thread, and someone else make a rationality thread. I'd do it myself, but I've already done one of each this year; each kind of thread is usually good for two or three karma, and it wouldn't be fair.

3JGWeissman11yWith the new discussion section, do we really need these recurring threads?
6NancyLebovitz11yI don't know. Open threads strike me as a better structure for conversation.
4Cyan11yProbably not the open thread, but I'd like the tradition of monthly rationality quotes threads to continue.
2whpearson11yPersonally I don't care about karma much, you can have my slice of the karma pie. Perhaps put a note reminding other people that they can post them.

Was that the Joan of Arc reference? I've been studying these sex-related genetic mutations and chromosomal abnormalities recently in a Biology class and her name came up. I found it fascinating and nearly left the comment there just for that. Each to their own. :)

3Perplexed11yMaybe it was the Joan comment. I can't find it now. That Joan comment annoyed me too, though I didn't say anything at the time. Not your fault, but just let a woman do something remarkable, something almost miraculous, and sure enough, some man 500 years later is going to claim that she must have actually been male, genetically speaking. I wasn't feminist at all until I came here to LW. Honest!
0wedrifid11yShe is a woman, regardless of whether she has a Y chromosome. It is the SRY gene that matters genetically. So we can use that observation to free us up to call evidence evidence without committing crimes against womankind. If my (most decidedly female) lecturer is to be believed, the speculation was based primarily on personal reports from her closest friends. It included things like menstrual patterns (and the lack thereof) and personal habits. I didn't look into the details to see whether or not this was an allusion to the typically far shorter vagina becoming relevant. I'm also not sure if the line of reasoning was prompted by some historian trying to work out what on earth was going on while researching her personal life, or just by biologists liking to feel that their knowledge is relevant to impressive people and events. If she hadn't done famous things then we probably wouldn't have any records whatsoever to go on, nor would anyone care to look.
[-][anonymous]11y 0

It's come to my attention that you're female. Apologies for assuming otherwise, and shame on you for not correcting me.

Shangri-La dieters: So I just recently started reading through the archives of Seth Roberts' blog, and it looks like there's tons of benefits of getting 3 or so tablespoons of flax seed oil a day (cognitive performance, gum health, heart health, etc.). That said, it also seems to reduce appetite/weight, neither of which I want. I haven't read through Seth's directory of related posts yet, but does anyone have any advice? I guess I'd be willing to set alarms for myself so that I remembered to eat, but it just sounds really unpleasant and unwieldy.

2AnnaSalamon11yPerhaps add your flax seed oil to food, preferably food with notable flavors of various kinds. It's tasty that way and should avoid the tasteless calories that are supposed to be important to Shangri-La (although I haven't read about Shangri-La, so don't trust me).
1jimmy11yFlaxseed oil has a strong odor. I think most people try to choke it down with their breath held to avoid the smell. It probably wouldn't count as 'flavorless calories' if you didn't. If you can't stand that, eat it with some consistent food.
0Will_Newsome11yOf note is that I was recommended fish oil instead as it has a better omega-3/omega-6 ratio, so I'll probably go that route.
[-][anonymous]11y 0

Not sure if this has been linked before, but this post about tracking your habits seems like a useful self-management technique.

NYT article on good study habits: http://www.nytimes.com/2010/09/07/health/views/07mind.html?_r=1

I don't have time to look into the sources but I am very interested in knowing the best way to learn.

Yes. You are assuming ze has a high level of introspection which would facilitate communication. This isn't always the case.