Open Thread: July 2010, Part 2

by Alicorn · 1 min read · 9th Jul 2010 · 770 comments

Open Threads
Personal Blog

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


July Part 1


Can I just state categorically that this "ways to increase existential risks" thing is stupid and completely over the top.

We should be able to discuss things, sometimes even continuing discussion in private. We should not stoop to playing silly games of brinksmanship.

Geez, if we can't even avoid existential brinksmanship on the goddamn LW forum when the technology is hypothetical and the stakes are low, what hope in hell do we have when real politicians get wind of real progress in AGI?

[-][anonymous]11y 30

This is silly - there's simply no way to assign, with any degree of confidence, a probability to his posts increasing the chance of UFAI, to the point where I doubt you could even get the sign right.

For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which overall will hinder its ability to accomplish its goals and make UFAI more likely.

From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list - which I won't comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

4fortyeridania11yThank you for pointing out the difficulty of quantifying existential risks posed by blog posts. The danger from deleting blog posts is much more tangible, and the results of censorship are conspicuous. You have pointed out two such dangers in your comment--(1) LWers will look nuttier and (2) it sets a bad precedent. (Of course, if there is a way to quantify the marginal benefit of an LW post, then there is also a way to quantify the marginal cost from a bad one--just reverse the sign, and you'll be right on average.)
2TheOtherDave11yThat makes sense for evaluating the cost/benefit to me of reading a post. But if I want to evaluate the overall cost/benefit of the post itself, I should also take into account the number of people who read one vs. the other. Given the ostensible purpose of karma and promotion, these ought to be significantly different.
1jimrandomh11yHuh? Why should these be equal? Why should they even be on the same order of magnitude? For example, an advertising spam post that gets deleted does orders of magnitude less harm than an average good post does good. And a post that contained designs for a UFAI would do orders of magnitude more harm.
2fortyeridania11yYou are right to say that it's possible to have extremely harmful blog posts, and it is also possible to have mostly harmless blog posts. I also agree that the examples you've cited are apt. However, it is also possible to have extremely good blog posts (such as one containing designs for a tool to prevent the rise of UFAI or that changed many powerful people's minds for the better) and to have barely beneficial ones. Do we have a reason to think that the big bads are more likely than big goods? Or that a few really big bads are more likely than many moderate goods? I think that's the kind of reason that would topple what I've said. One of my assumptions here is that whether a post is good or bad does not change the magnitude of its impact. The magnitude of its positivity or negativity might change the magnitude of its impact, but why should the sign? I'm sorry if I've misunderstood your criticism. If I have, please give me another chance.
2JamesAndrix11yhttp://www.damninteresting.com/this-place-is-not-a-place-of-honor [http://www.damninteresting.com/this-place-is-not-a-place-of-honor] Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and I think [some but not all others] I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough. Don't assume it's OK because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.) Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway. Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy. http://www.damninteresting.com/this-place-is-not-a-place-of-honor [http://www.damninteresting.com/this-place-is-not-a-place-of-honor]

No one asked or forced Roko to leave Less Wrong. Wounded by Eliezer's public reprimand, Roko deleted all his comments and said that he was leaving.

(I for one wish he would come back. He was a valuable contributor.)

Correct. I was not asked to leave.

Something really crazy is going on here.

You people have fabricated a fantastic argument for all kinds of wrongdoing and idiot decisions, "it could increase the chance of an AI going wrong...".

"I deleted my comment because it was maybe going to increase the chance of an AI going wrong..."

"Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid..."

"Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong."

I'm beginning to wonder whether EY and this movement, rather than unfriendly AI, might be the bigger risk.

Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressor and that of others? Screw you. That’s not what I want. Either I win like this, or I don’t care to win at all. What’s winning worth, what’s left of a victory, if you have to relinquish all that you value? That’s not winning, it’s worse than losing, it means to surrender to mere possibilities, preemptively.

This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.

The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?

If you read this comment thread you'll see what I mean and what danger this movement might pose: 'follow Eliezer', 'donating as much as possible to SIAI', 'kill a whole planet', 'afford to leave one planet's worth', 'maybe we could even afford to leave their brains unmodified'... lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.

Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping the same.

9Blueberry11yI'm not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don't think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don't see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing [http://en.wikipedia.org/wiki/The_Funniest_Joke_in_the_World], I would want it removed. As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn't be taken to reflect anything about anyone else.

I know, it wasn't my intention to discredit Peer, I quite like his ideas. I'm probably more crazy than him anyway.

But if I can come up with such conclusions, who else will? Also, why isn't anyone out to kill people, or going to be? I'm serious, why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would let loose a rogue AI soon. Given how the LW audience is inclined to act upon 'mere' probability estimates, wouldn't it be appropriate to bomb Google, if that were the only way to stop them in due time from turning the world into a living hell? And isn't this meme, given the right people and circumstances, a great danger? Sure, me saying EY might be a greater danger was nonsense, just said to provoke some response. By definition, not much could be worse than uFAI.

This incident is simply a good situation to extrapolate from. If a thought-experiment can be deemed dangerous enough not just to be censored and deleted, but for people to be told not even to seek any knowledge of it, much less discuss it, I'm wondering about the possible reaction to an imminent and tangible danger.

So does Rain, but I guess that's no big deal.

Maybe I'm not understanding the situation, but I perceived a lot more confusion than disrespect coming from Rain in this thread.

::pauses::

At this point I'd normally try to give some sort of advice, but, to be honest, at this point I don't know what I might say that would be helpful to you, or if you would even want me to try. The best analogy I can come up with is that of a foreigner trying to adapt to a different language and culture, such as an American trying to learn Japanese and live in Japan; us "natives" have a devil of a time trying to explain to someone what we take for granted and don't actually have explicit knowledge of.

For example, this is an explanation of "politeness levels" in the Japanese language, as seen from an outsider's perspective:

Depending on who you are speaking to, your politeness level will be very different. The correct level of politeness depends on the age of the speaker, age of the person being spoken to, time of day, astrological sign, blood type, sex, whether they are Grass or Rock Pokemon type, color of pants, and so on. For an example of Politeness Levels in action, see the ex

... (read more)

Since I consider these questions to be fairly loaded, I'm going to break them down and provide my own understanding of what they say before I give an answer. Please feel free to correct my analysis.

Have you ever imagined what it would be like not to understand why people react the way they do, all while others, never you, are given slack for having a worse understanding? And then to find that they're predicating their actions on misinformation that they refuse to update, no matter what you show them? And if you have, are you proud of the way you've treated a person in that position?

My rephrasing: Simulate a person, AM, who, when interacting with other people, is punished for their behavior. Assume that AM does not know why they are being punished. AM also witnesses other people performing substantially similar actions and not receiving the same punishment. AM then comes to the belief that these people are wrong in their application of punishment, and unwilling to admit they are wrong to punish, regardless of what actions AM takes to correct them. Now that you've simulated AM and gone through the scenario as described, take a moment to search through your memory for people who m... (read more)

It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.

Thoughts? (If someone's said this before, I apologize for not remembering it.)

8Roko11yNo, I want my definition of "emergent" to say that the ideal gas laws are emergent properties of molecules. Why not just say "We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description"?
2Liron11yThe high-level structure shouldn't be the same as the low level structure, because I don't want to say a pile of sand emerges from grains of sand.
7Unnamed11yIt's worth checking on the Stanford Encyclopedia of Philosophy [http://plato.stanford.edu/entries/properties-emergent/] when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream. The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower." There's more there, some of which seems sensible and some of which I don't understand.
7Roko11yIt seems problematic that as soon as you work out how to derive high-level behavior from low-level behavior, you have to stop calling it emergent. It seems even more problematic that two people can look at the same phenomenon and disagree on whether it's "emergent" or not, because Bob knows the relevant derivation of high level behavior from low level behavior, but Alice doesn't, even if Alice knows that Bob knows. Perhaps we could refine this a little, and make emergence less subjective, but still avoid the mind-projection fallacy. We say that a system X has emergent behavior if there exists an exact and simple low-level description and an inexact but easy-to-compute high-level description, and the derivation of the high-level laws from the low-level ones is much more complex than either. [In the technical sense of Kolmogorov complexity] (Like "Has chaotic dynamics", it is a property of a system)
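A minimal formal sketch of the refined criterion above (the notation is mine, not Roko's):

```latex
% Sketch of the proposed criterion. L = exact, simple low-level description,
% H = inexact but easy-to-compute high-level description,
% D = shortest derivation of H's laws from L, K = Kolmogorov complexity.
\[
  \mathrm{Emergent}(X) \;\iff\; K(D) \gg \max\!\bigl(K(L),\, K(H)\bigr)
\]
```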
6orthonormal11yI dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign [http://lesswrong.com/lw/it/semantic_stopsigns/] to a connotation of unsolved problem.
2[anonymous]11yBy your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi? I may not understand the work that "low-level" and "high-level" are doing in this discussion. On the length of derivations, here are some relevant Gödel clichés: System X (for instance, arithmetic) often obeys laws that are underivable. And it often obeys derivable laws of length n whose shortest derivation has length busy-beaver-of-n. (Über die Länge von Beweisen is the title of a famous short Gödel paper. He revisits the topic in a famous letter to von Neumann, available here: http://rjlipton.wordpress.com/the-gdel-letter/ [http://rjlipton.wordpress.com/the-gdel-letter/])
3apophenia11yJust a pedantic note: pi has not been proven normal. Maybe one fifth of the digits are 1s.
4[anonymous]11yI'll stick to it. It's easier to perform experiments than it is to give mathematical proofs. If experiments can give strong evidence for anything (I hope they can!), then this data can give strong evidence that pi is normal: http://www.piworld.de/pi-statistics/pist_sdico.htm [http://www.piworld.de/pi-statistics/pist_sdico.htm] Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.
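For readers who want to reproduce this kind of digit statistic themselves, here is a short sketch (it assumes the mpmath library; the cutoff of 10,000 digits is arbitrary):

```python
# Sketch: tally decimal-digit frequencies of pi. Near-uniform counts are
# weak empirical evidence bearing on normality -- not a proof, since pi
# has not been proven normal.
from collections import Counter
import mpmath

N = 10_000
mpmath.mp.dps = N + 10                   # a little extra working precision
digits = mpmath.nstr(+mpmath.pi, N)[2:]  # "3.14159..." with the "3." stripped

counts = Counter(digits)
for d in "0123456789":
    # Each fraction should hover near 0.1 if the digits are uniform so far.
    print(d, counts[d], round(counts[d] / len(digits), 4))
```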
2wedrifid11yThe latter case seems more likely to me.
3JoshuaZ11yThe only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"- that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one but one would need to be careful that the common connotations don't cause confusion.

Suppose I were to threaten to increase existential risk by 0.0001% unless SIAI agrees to program its FAI to give me twice the post-Singularity resource allocation (or whatever the unit of caring will be) that I would otherwise receive. Can you see why it might have a policy against responding to threats? If Eliezer does not agree with you that censorship increases existential risk, he might censor some future post just to prove the credibility of his precommitment.

If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just trying to convince him of that using rational arguments. I rather doubt that Eliezer has some sort of unfixable bug regarding censorship that has to be patched using such extreme measures. It's probably just that he got used to exercising strong moderation powers on SL4 (which never blew up like this, at least to my knowledge), and I'd guess that he has already updated on the new evidence and will be much more careful next time.

6wedrifid11yI do not expect that (non-costly signalling by someone who does not have significant status) to work any more than threats would. A better suggestion would be to forget raw threats and consider what other alternatives wfg has available by which he could deploy an equivalent amount of power that would have the desired influence. Eliezer moved the game from one of persuasion (you should not talk about this) to one about power and enforcement (public humiliation, censorship and threats). You don't take a pen to a gun fight.
2Wei_Dai11yI don't understand why, just because Eliezer chose to move the game from one of persuasion to one about power and enforcement, you have to keep playing it that way. If Eliezer is really so irrational that once he has exercised power on some issue, he is no longer open to any rational arguments on that topic, then what are we all doing here? Shouldn't we be trying to hinder his efforts (to "not take over the world") instead of (however indirectly) helping him?

How much experience do you have with various online communities?

I've found that those with somewhat strict moderation by sane people have better discussion than those with little or no moderation.

I think "freedom of speech" has different connotations, and different consequences in online communities, compared to the real world. Anonimity makes a big difference, as does the possibility of leaving and joining another community, or the fact that "real-life consequences" are much smaller.

I'm seriously considering brainstorming a list of easy ways to increase existential risks by 0.0001%, and then performing one at random every time I hear such a reduction cited as the reason for silliness like this.

(Deleting this post, or the one I'm replying to, would count.)

I'm not sure I understand what you mean here - are you saying that you are willing to try to increase existential risk by 0.0001% if someone deletes your post???

If so, you're a fucking despicable dick. But I may have misunderstood you.

8wedrifid11yThat particular incident was not one in which Eliezer came across as sane (or stable). I don't believe moderation itself is the subject of wfg's criticism.

That ... may be true. I'm not very interested in putting Eliezer on trial, it's the kind of petty politics I try to avoid. He seems to be doing a pretty good job of teaching interesting and useful stuff, setting up a functional community, and defending unusual ideas while not looking like a total loon. I don't think he needs "help" from any back seat drivers.

The impression I got of the whole Roko fiasco was that Eliezer was more concerned with avoiding nightmares in people he cared about than with the repercussions of Roko's post on existential risk. But I didn't dig into it very much - as I said, I'm not very interested in he said / she said bickering. So I may be wrong in my impressions.

2waitingforgodel11yHey Emile, Please check out my other comments on this thread [http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2o1v?c=1] before replying, as it sounds like my reasoning isn't fully clear to you. Re: policing an online community I agree that there are a lot of options to consider about how LW should be run, and that if people don't like EY deleting their posts they're free to try and set up their own LW in parallel. I don't think it would be a good thing, or something we should encourage, but I agree it's an option. I also agree that some policing can help prevent a negative community from developing -- that's one reason I was glad to see that LW went with the reddit platform. It's great at policing. I think it's a big part of what makes LW so successful. That said, I also think that users should try other options rather than simply giving up on LW if they don't like what's going on. That's what I'm doing here. Re: 0.0001% You didn't misunderstand me about the whole post deletion thing. To my mind 0.0001% isn't that much compared to what the post deletion means about the future of LW. All this cloak-and-dagger silliness hurts the community. I'm doing my part to avoid further damage. No one is going to delete it (I think? :p), so it doesn't really matter either way. -wfg

scary-cult-like-weirdos

To my mind 0.0001% isn't that much compared to what the post deletion means about the future of LW.

You're threatening to on average kill at least 6000 people, in order to get the moderation policy you prefer. You're also not completely insensitive to how people appear to others. Would you like to reconsider how you've been going about achieving your aims?
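The arithmetic behind that figure, as a rough sketch (the ~6.8 billion world population circa 2010, and reading "0.0001%" as an absolute increase in extinction probability, are my assumptions, not stated in the thread):

```python
# Rough expected-value arithmetic behind "at least 6000 people".
world_population = 6.8e9  # approximate world population in 2010 (assumption)
delta_p = 0.0001 / 100    # "0.0001%" read as an absolute probability increase

expected_deaths = world_population * delta_p
print(expected_deaths)    # 6800.0 -- on the order of the cited "6000 people"
```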

I find it hard to relate to the way of thinking of someone who's willing to increase the chances that humanity goes extinct if someone deletes his post from a forum on the internet.

Please go find another community to "help" with this kind of blackmail.

If I understand him correctly, what he's trying to do is to precommit to doing something which increases ER, iff EY does something that he (wfg) believes will increase ER by a greater amount. Now he may or may not be correct in that belief, but it seems clear that his motivation is to decrease net ER by disincentivizing something he views as increasing ER.

1waitingforgodel11yRight. Thanks for this post. People keep responding with knee-jerk reactions to the implementation rather than thought out ones to the idea :-/ Not that I can blame them, this seems to be an emotional topic for all of us.

I would call the police, who would track you down and verify that you were bluffing.

And you'd probably be cited for wasting police time. This is the most ridiculous statement I've seen on here in a while.

Well, it's pretty close. Not perfect, but close.

Most people do seem to have a functional understanding of respectful speech, even if they can't articulate it verbally. (And knowing something without being able to explain it isn't weird. Most people, when learning to speak their first language, learn how to follow rules of grammar without being able to say what those rules are. Similarly, most people can't explain how to walk, either.) So if I had not interacted with you before, I would indeed expect that you would be able to distinguish between a respectful-seeming post and a disrespectful-seeming post, and be able to reliably produce respectful-seeming posts, even if you couldn't tell me how you did it. However, you have demonstrated that you are an unusual individual who has a "broken respectfulness detector", so to speak, so I no longer expect you to refrain from making disrespectful posts. I might wish that you were more respectful, but I might as well wish for a pony while I'm at it.

And I don't know if "disrespectful" is even the right word. "Asshole-ish" might be more accurate. And rather than "respectful" you might want to try ... (read more)

7SilasBarta11yI don't watch House nor much TV at all. I'm a native English speaker, but people often do say I sound "foreign" (usually German, for some reason) and that I speak with a more "intelligent" and "upper class" tone. I remember messaging you a lot a while back, noting your eerie similarities to me in terms of personal experience. AFAICT, the only real differences between you and me are: * You show more restraint. * I'm less proactive about my Japanophilia. * I didn't give up after I found myself unemployed and living with my parents (though, ashamedly, it was more out of hatred that I didn't give up than any noble kind of willpower). As for being less asshole-ish, I do believe I can pull it off with minimal effort. The problem is that I cannot be significantly less asshole-ish, while also 1) impressing on others the importance of updating in my direction, and/or 2) posting like everyone else does, i.e., if I made my posts less asshole-ish, I would have to avoid making posts like Rain's recent ones, due to mistakenly classifying them as asshole-ish. Regarding 1), a lot of you believe that my tone of posting is likely to do the opposite, as it turns people off from agreeing with me. While that might be true for social issues like who-"likes"-Silas, I strongly disagree that it holds on substantive issues. I've spent a lot of my internet posting career, years ago, being "nice" in arguments, which, yes, I'm capable of. I found that, rather than turn people on to my views, it simply legitimized, in their minds, the ridiculous positions they were taking, and allowed them to confidently go away believing it was just a case of "reasonable people disagreeing about a tough issue". In contrast, when I used my regular, "asshole-ish" tone, then yes, at the time they resisted my point with all the rationalization they could muster. But shortly afterward, they'd quietly accept it without admitting defeat, and argue in favor of it later. For example, I'm famous, under

I've spent a lot of my internet posting career, years ago, being "nice" in arguments, which, yes, I'm capable of. I found that, rather than turn people on to my views, it simply legitimized, in their minds, the ridiculous positions they were taking, and allowed them to confidently go away believing it was just a case of "reasonable people disagreeing about a tough issue".

My experience is different. I used to post online in a more critical and uncompromising fashion. Yet over the years, I came around to a more pleasant and accommodating style, and I find that it works better. Even though I have to swallow things I would like to say on the spot, I often look back on the thread later and feel glad that I took the high road.

I can't convince everyone that I debate with, but I've managed to pull a bunch of people in my direction, which I don't think I could have accomplished with a more bombastic style. Furthermore, my view is that even if I can't convince a particular person I'm debating with, I can still convince the lurking fence-sitters.

Do I want to murder my karma to point out flaws in Alicorn's advice, or do I want to be "part of the tribe"?

In a top-level post that you will remember, I criticized Alicorn's advice in a way that only gained me karma. This sort of thing can be done.

Why do you want to change minds? Is there any chance that you could abandon that value? Because I believe, paradoxically, that it would help you achieve that same value :-)

I'm probably better than you at general social skills, but the first drafts of my comments sometimes sound disturbingly like yours. Then I notice that excessive subconscious desire to change minds interfered with my clarity of thought, and rethink/rewrite the comment. I want the "ideal commenter me" to never care who said what, who's right and who's wrong, etc. My perfect comment should make a clear, correct, context-free statement that improves the discussion, and do absolutely nothing else. I consciously try to avoid saying things like "you're wrong", saying instead "the statement you propose doesn't work because..." or even better "such-and-such idea doesn't work because...".

Ironically, people do often change their minds when talking to me about topics I understand well. But I'm not setting out to do it. Actually I have an explicit moral system, worked out from painful experience, that says it's immoral for me to try to convince anyone of anything. I try to think correct thoughts and express them clearly, and let other people make conclusions for themselves.

8Wei_Dai11yCan you explain what that painful experience was? Because other people seemed to have learned from their past experience that being "cocky" led to good results instead of bad. (I know someone else who tried to participate on Less Wrong and stopped after being frequently downvoted due to apparent overconfidence, and his explanation was very similar to Silas Barta's, i.e., his style is effective in other online forums that he participates in.)
7cousin_it11yWhen my job and my family self-destructed at the same time, I realized that I had no major personal successes because I'd blindly believed in others' goals and invested all my effort in them. Then I looked over my past to find occurrences where I'd made others worse off by manipulating their motivations, and found plenty of such occurrences. So I resolved to cut this thing out of my life altogether, never be the manipulator or the manipulatee. This might be an overcorrection but I feel it's served me very well in the 4 years since I adopted it. A big class of negative emotions and unproductive behaviors is just gone from my life. Other people notice it too, making compliments that I'm "unusual" and exceptionally easy to be with.
2NancyLebovitz11yHow do you identify it when others are attempting to manipulate you?
5cousin_it11yThis sort of question is always difficult to answer... How does one identify that a shoe is a shoe? I seem to have something like "qualia" for manipulation. Someone says something to me and I recognize a familiar internal "pull": a faint feeling of guilt, and a stronger feeling of being carefully maneuvred to do some specific action to avoid the guilt, and a very strong feeling that I must not respond in any way. Then I just allow the latter feeling to win. It took a big conscious effort at first, but by now it's automatic.
2Relsqui11yHas this caused you difficulty in social situations where a certain degree of manipulation is usually considered acceptable? I'm thinking of cases where someone is signalling that they want a hug, or a compliment, or to be asked after. Certainly it would be nice if people stated their needs clearly in those situations, but a) that's not "normal" in our culture and a lot of people never consider it, b) it's sometimes very difficult even when you know it's an option, and c) ignoring people in those situations won't lead them to be clearer next time, it just makes it seem like you don't care about their distress.
2cousin_it11ySignaling that you want a hug isn't manipulation in my book, it's just nonverbal communication. But I can't be guilt-tripped into a hug or a compliment.
3Relsqui11yFair enough. But I'm not sure where the lines are between those things. That question might be a good addition to this post [http://lesswrong.com/r/discussion/lw/2t6/expanded_everyday_questions_wanting_rational/] .
1TheValliant11yI am very glad to hear of someone else who had a similar experience and made a similar choice. While it may be an overreaction, I think that it is not an inappropriate way to live one's life.
1cousin_it11yExpand? I'd be interested to hear similar stories.
3TheValliant11yI had been socially maladjusted, but then found that I could be charming and manipulate people rather effectively. I took advantage of this for perhaps a year, but began to feel guilty for my manipulations. I began to realize I was changing who they were without their permission and without them being able to stop me. Once I had realized that I was for all intents and purposes emotionally violating people, I swore it off entirely. If I cannot make my point and convince someone of something through the facts (or shared consensus, for debates that aren't based on facts,) I stop. I hope this was informative. If you have more detailed questions I would be glad to expand, but I haven't thought about this in a few years and I don't know that I summed it up completely.
3wedrifid11yAnother belief that is worth changing is 'conversations should be fair'. Having no expectations of others beyond bounded Machiavellian interaction can allow one to guide a conversation in a far more healthy direction.
1NancyLebovitz11yWhat do you mean by 'bounded Machiavellian interaction'?
4wedrifid11yI'm being terse to the point of being outright opaque, but I mean that if you expect people to try to maximise their own status (and power in general) in all their interactions rather than trying to be reasonable or fair, then you will find conflict-laden conversations less frustrating. I say 'bounded' because people aren't perfect Machiavellian agents even when they try to be - you need to account for stupidity as well as political motivations.
7NancyLebovitz11yThis reminds me of one of my heuristics-- if you claw at people, it's reasonable to expect them to claw back. This doesn't mean "never claw at people". It just means "don't add being offended at them clawing back to the original reasons you had for clawing at them".
1katydee11yI'm very interested in this system, as it matches some of my own recent moral insights. How exactly did you go about implementing it?
1cousin_it11yDo you have problems with detecting manipulation in yourself and others, or problems stopping it when you've detected it?
1katydee11yDepends on what you mean by manipulation. I can (obviously) easily detect falsehood in myself, and have more or less suppressed it. I can also easily detect and suppress "technical truth" answers and other methods of deception. However, I think I need to work on detecting manipulation in others and resisting its effects. I'm pretty good at resisting flattery, but I'm sure that there are more subtle methods out there that I am unaware of and therefore susceptible to.
1cousin_it11yFor me the biggest problem was guilt-trips, not flattery.

So therein lies the problem: do I want to change minds, or do I want to be liked? Do I want to murder my karma to point out flaws in Alicorn's advice, or do I want to be "part of the tribe"? I think you know what decision I've made, and why you haven't done the same.

Now I understand better! You're Gilbert and Sullivan's Disagreeable Man! ;)

If you give me your attention, I will tell you what I am:
I'm a genuine philanthropist - all other kinds are sham.
Each little fault of temper and each social defect
In my erring fellow-creatures, I endeavour to correct.
To all their little weaknesses I open people's eyes,
And little plans to snub the self-sufficient I devise;
I love my fellow-creatures - I do all the good I can -
Yet everybody says I'm such a disagreeable man!
And I can't think why!

To compliments inflated I've a withering reply,
And vanity I always do my best to mortify;
A charitable action I can skilfully dissect;
And interested motives I'm delighted to detect.
I know everybody's income and what everybody earns,
And I carefully compare it with the income-tax returns;
But to benefit humanity, however much I plan,
Yet everybody says I'm such a disagreeable man!
And I can't thi

... (read more)

I'd say that was a post which was convincing without being obnoxious.

You raise an interesting point. I think it's possible to be forceful and polite at the same time, but the rules for doing so are less obvious (at least to me) than the rules for being polite.

Anyone have ideas about how that combination works?

One general rule is "be harsh on the issue and soft on the person" (from Getting to Yes).

For instance, "every single part of your post struck me as being either a factual mistake, flawed reasoning, or gratuitous allusion to an irrelevant topic" is forceful but (if actually sincere and backed up with argument) conveys no disrespect for the author. We're (almost) all human here, and so have brain farts every so often. Claiming that your interlocutor has made a mistake or a dozen is both fair and constructive.

By contrast, "so basically you like to make gratuitous references to Japanese culture" is insulting to your interlocutor, even as it leaves the issue unaddressed: you are implying (though not outright saying) that the allusion to Japanese culture was not relevant to the argument. The cooperative assumption is that your interlocutor thought otherwise, but you're implying that they brought up something irrelevant on purpose.

I can attest from personal experience that the rule works well in situations of negotiation, which definitely are about changing your mind (both yours and the interlocutor's, since if either refuses to budge, the negotiation will fail).

I doubt that being an asshole, in and of itself, ever helps.

9jimrandomh11yAgreed, but I think the respectfulness of this quote can be improved further, by replacing "your post" with "this post". It seems silly and doesn't change the semantic content at all, but de-emphasizing the connection between a post and its author by avoiding the second person serves to dampen status effects and make it easier for the other person to back down or withdraw from the conversation.
3NancyLebovitz11yI've framed it as "treat everyone as though they're extremely thin-skinned egomaniacs", and at this point I'm experimenting with being a little less cautious, just for my own sanity. However, it's true that a lot of people are very distracted by insults, and there's no point in saying that they should be tougher.
5EStokes11yI think that starting off acting somewhat lower status and underplaying your confidence in the start, at least, can work. It makes it feel less like an attack on the other person and can maybe make it feel more like they're awesome rationalists for being so quick to see the evidence when it's presented (assuming, of course, that you're right.) And if you're wrong, again, it won't feel like an attack on them, and they'll be more likely to present why they're right in a way that shows your idea as an honest, easily-made mistake instead of harshly steamrolling your arguments and making you look like a dullard. ETA: If you are right, but the person doesn't see it after your first comment, then the "awesome rationalist quickly accepting evidence" feeling can take a hit. To make them still feel that, it might be a good idea to extrapolate/present more points and apologize for not being clear. Just a quick "Ah, sorry, I wasn't clear. What I meant to say was blah blah blah" should work. Keeping deferential should be remembered. And if they're right, and you didn't understand it, hopefully caution will have prevented it from escalating any. The more of a status war it becomes/The harder it is to save face, the harder it becomes to convince the other person and get them to agree. Though there's always the chance that you convince them but they won't admit it because they'll lose face. ("You" is used as one/anyone/people in general, of course.)

In contrast, when I used my regular, "asshole-ish" tone, then yes, at the time they resisted my point with all the rationalization they could muster. But shortly afterward, they'd quietly accept it without admitting defeat, and argue in favor of it later.

Note that this does not automatically mean that it was you who changed their minds. I've had similar experiences to you, but I just assume that it means reality in the long run is more convincing than I am in the short run. It's really pretty narcissistic to assume that you're changing anybody's mind about anything, regardless of what voice you use. ;-)

Also, your assertion that you have only two modes of discourse (ineffectual-nice or effective-asshole) is a false dichotomy. Aside from the fact that there are more than two ways to speak, it leaves out any evaluation of who the target audience is -- which is likely to have as much or more impact on the effective/ineffective axis than whether you're nice!

5orthonormal11yI second cousin_it here. If your goal is to persuade (and especially if you care about persuading third parties), then your methods may be counterproductive even if they seem more effective to you. (For a classical example, take the dialogues of Socrates: polite and deferential to a fault, never directly convincing the antagonist, but winning points in the eyes of observers until the antagonist is too shamed to continue. I don't find this to be the ideal of Internet argument, but it would be an improvement.) It certainly feels better to berate a fool on the Internet than to be more detached, and you might indeed generate success stories from time to time; but you may actually be less effective at swaying the bulk of opinion. I find that I'm often hesitant to even read your long comments in the first place, due to their usual tone. It's your call how to behave on Less Wrong, but it's my call whether to downvote your comments, based on whether they represent the sort of discourse I want to see here.
1wedrifid11yThirded. At times such methods give people an excuse to use even worse arguments and engage in more detrimental social-political gambits than they would otherwise have gotten away with. Observers are hesitant to intervene to penalise bullshit [http://press.princeton.edu/titles/7929.html] when to do so will affiliate them with a low status display. The dykes that maintain the sanity water level are damaged. Even when others wish to intervene on the side of reason in such cases it is extremely hard work. You have to be ten times more careful, unambiguous and polite than normal in order to achieve the same effect.
1NancyLebovitz11yIt's more complicated than "courtesy = status". It's easiest for me to observe status online, and it seems to me that I've seen more high status people who flame occasionally than those who never do.
1wedrifid11yI agree, and my observations match yours.
4xamdam11yDo you notice that you ooze aggression? Just look at the sentence above. It will clearly be interpreted as "intended to hurt" by 99% of observers. Update. Some general advice on being more civil: in any post (or a list of things) that you write, erase the last item before posting. It's usually the one you sneak aggression into. I think it will do wonders for your karma. ETA: quickly edited to remove some of my own unproductive aggression
8Blueberry11yThough I'd agree that Silas oozes aggression sometimes, I didn't find that sentence "intended to hurt" at all, possibly because I'd read CronoDAS's previous comments discussing this issue.
8CronoDAS11yFor the record, I'm in no way offended by that particular remark.
7SilasBarta11yNormally, I wouldn't have said something like that, but Crono and I have talked about our lives, and he knows where I stand on that issue and has updated accordingly. I wasn't springing anything new on him by saying that. This doesn't, of course, change the fact that I should have accounted for how people wouldn't be aware of this and would interpret it as meaner than it really is.
8xamdam11yYou should also be aware that what you say to a person openly in private and what you say in public evoke different levels of emotions in the same recipient, especially if their private interpretation/rationalization of the facts will not be the public default. There is a special hell in some religions for embarrassing someone in public, and there is some sense to that.
2wedrifid11yAdd me as a yet another example of an observer who doesn't share that interpretation. Also add me as an example of an observer who considers your demand to update patronising and itself an instance of social aggression.

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.

I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

9[anonymous]11yIt's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.
8cousin_it11yIt's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result. (This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)
3bogus11yWhat "treatment" did you have in mind? At best, Roko made an honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others. What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.
1wedrifid11yI just noticed this. A brilliant disclaimer!
5rhollerith_dot_com11yParent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

8[anonymous]11yCould the people who have such links post them here?

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.

1jsalvatier11ylol

I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

6daedalus2u11yI am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current views, and a useful bridge between aspects of LW that are less consilient. :(
[-][anonymous]11y 17

Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:

I don't usually talk like this, but I'm going to make an exception for this case.

Listen to me very closely, you idiot.

[paragraph entirely in bolded caps.]

[four paragraphs of technical explanation.]

I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Although it does not IMHO make it praiseworthy, the above quote probably makes Roko's decision to mass delete his comments more understandable on an emotional level.

In defense of Eliezer, the occasion of Eliezer's comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.

If either Roko or Eliezer wants me to delete (part of all of) this comment, I will.

EDIT: added the "I don't usually talk like this" paragraph to my quote in response to criticism by Aleksei.

I'm not them, but I'd very much like your comment to stay here and never be deleted.

2timtyler11yYour up-votes didn't help, it seems.
1cousin_it11yWoah. Thanks for alerting me to this fact, Tim.
5Aleksei_Riikonen11yDoes not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.
5[anonymous]11yI'm still having trouble seeing how so much global utility could be lost because of a short blog comment. If your plans are that brittle, with that much downside, I'm not sure security by obscurity is such a wise strategy either...

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

5wedrifid11yOf course as you know very well. :)

A side effect of banning one post, I think;

In a certain sense, it is.

7JoshuaZ11yOf course, we already established that you're Light Yagami [http://lesswrong.com/lw/12s/the_strangest_thing_an_ai_could_tell_you/xc5?c=1].
4thomblake11yI'm not sure we should believe you.

Cryo-wives: A promising comment from the NYT Article:

As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests!

My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing.

Part of the strength of my negative response, I realized, had to do with the fact that my relationship with my own mortality was on shaky ground. I don't want to die. But I'm fairly certain I'm going to. Like many people, I've struggled to come to a place where I can accept the specter of my own death with some grace. Humbleness and acceptance in the face of death are valued very highly (albeit not always explicitly) in our culture. The companion, I think, to this humble acceptance of death is a humble (and painful) acce

... (read more)

That is really a beautiful comment.

It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

6Eliezer Yudkowsky11yI always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.
4Will_Newsome11yHopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?

I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.

The link to the video is here: http://www.youtube.com/watch?v=FOYEJF7nmpE

2[anonymous]11ydel
2John-Henry11yPen and paper interviews would almost certainly be more accurate. The problem being that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed, many of whom took several minutes before responding to some questions. However, the majority of them were concerned with how much time the interview would take up, and their quick responses were self-imposed. As to whether the evidence is too messy to draw firm conclusions from, I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just like in fiction films, a higher degree of manipulation over the audience is more sought after than accuracy.
2RobinZ11yI just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment [http://lesswrong.com/lw/j7/anchoring_and_adjustment/].

I don't understand why you're so upset about LW posts being deleted, to the extent of being willing to increase existential risks just to prevent that from happening.

The US government censors child pornography, details of nuclear weapon designs, etc., with penalty of imprisonment instead of just having a post deleted. If you care so much about censorship, why do you not focus your efforts on it instead? (Not to mention other countries like China and North Korea.)

The US government censors child pornography, details of nuclear weapon designs, etc., with penalty of imprisonment instead of just having a post deleted. If you care so much about censorship, why do you not focus your efforts on it instead? (Not to mention other countries like China and North Korea.)

One reason would be if you believe that the act of suppressing a significant point of discussion about the possible actions of an FAI matters rather a lot. "Don't talk about the possibility of my 'friendly' AI torturing people" isn't something that ought to engender confidence in a friendliness researcher.

I'm beginning to see what you're up against.

I hope other folks will chime in if they disagree with me, but I'd say that "Are you proud of yourself?" is always an attack, and specifically a parent-to-child sort of attack at that.

If you believe an honest answer to a question is likely to leave the person answering it feeling really bad, then the question is an attack. At the same time, I think you're telling the truth when you say you're perplexed that it was taken as a loaded question.

I've got some guesses about what's going on with you, but it's getting into pretty personal territory. Let me know if you're interested, and if so, whether you'd prefer a public post or a private message.

2komponisto11yAgree, but Silas's actual question was one which could be an honest inquiry.

You're right about the quote.

However, I think asking people if they're proud of what they've done when you've made it clear you don't approve of it and they haven't shown signs of pride is still an attack, though a milder one.

Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.

3[anonymous]11yThis is really an excellent, down-to-earth, one-minute teaser, to go that route. Excellent writing. I wish I had a follow-up move for those who get interested after that point, but raise doubts, be they philosophical, religious, moral, or scientific (the last probably the easiest). I know those issues have been discussed already, but how could one react in a five-minute coffee break, when the co-worker responds (standard phrases to go): "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?" I know the sequence answers, but I utterly fail to give any non-embarrassing answer to such questions. It does not help that I'm not signed up for cryonics myself.
2JoshuaZ11yIf they think that we'll all eventually die even with cryonics, and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it just pushes back the time until death. (I wouldn't bother addressing the death-gives-meaning-to-life claim except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.) As to the problem of too many people: overpopulation is a massive problem whether or not a few people get cryonically preserved. As to the problem of just the rich getting the benefits: patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more, it won't matter.
2Blueberry11yI talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation. "Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."

I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.

More importantly, you aren't threatening to publicize something embarrassing to Eliezer if he doesn't comply, so it's technically extortion.

8Wei_Dai11yI think by "blackmail" Eliezer meant to include extortion since the scenario that triggered that comment was also technically extortion.

I think the "repeating myself" that you refer to is quite excusable, though, unless you have a better idea for what to do

My favorite option is disengagement (stop replying), though its implementation is mood dependent. Another path I often take is a complete rework of the topic from a different viewpoint: either offer a different anecdote or a different metaphor for the same point. Explain from a different angle.

At least everyone got a chance to show off how great they are, right?

This is an example of the tone thing I was talking about.

Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.

Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).

  1. When tempted to write a critique or to run an experimental refutation of a recent publication, consider the possibility of proposing joint research under an agreed protocol. We call the scholars engaged in such an effort participants. If theoretical differences are deep or if there are large differences in experimental routines between the laboratories, consider the possibility of asking a trusted colleague to coordinate the effort, referee disagreements, and collect the data. We call that person an arbiter.
  2. Agree on the details of an initial study, designed to subject the opposing claims to an informative empirical test. The participants should seek to identify results that would change their mind, at least to some extent, and should explicitly anticipate their interpretations of outcomes that would be inconsistent with their theoret
... (read more)

you like to make gratuitous references to Japanese

This is you being an asshole. The earlier parts of the same comment are OK.

One thing that often makes your comments come off as asshole-ish is deliberate violation on your part of the Gricean cooperative principle, in its form as an assumption about other people's communications. You interpret your own communications in the best possible light and your interlocutors' in the worst.

but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.

You appear to be confused. Wfg didn't propose to murder 6700 people. You did mathematics by which you judge wfg to be doing something as morally bad as 6700 murders. That doesn't mean he is breaking the law or doing anything that would give you the power to use the police to exercise your will upon him.

I disapprove of the parent vehemently.

This seems like a highly suboptimal solution. It's an explicit attempt to remove Roko from the top contributors list... if you/we/EY feels that's a legitimate thing to do, well then we should just do it directly. And if it isn't a legitimate thing to do, then we shouldn't do it via gaming the karma system.

Since I assume he doesn't want to have existential risk increase, a credible threat is all that's necessary.

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail. See this comment.

(EDIT: I deleted the rest of this comment since it's redundant given what you've written elsewhere in this thread.)

This is true, and yes wfg did imply the threat.

(Now, analyzing not advocating and after upvoting the parent...)

I'll note that wfg was speculating about going ahead and doing it. After he did it (and given that EY doesn't respond to threats, speculative-wfg should act now based on the Roko incident), it isn't a threat. It is then just a historical sequence of events. It wouldn't even be a particularly unique sequence of events.

Wfg is far from the only person who responded by punishing SIAI in a way EY would expect to increase existential risk, i.e. not donating to SIAI when they otherwise would have, or updating their p(EY (SIAI) is a (are) crackpot(s)) and sharing that knowledge. The description on RationalWiki would be an example.

1timtyler11yI don't think he was talking about human beings there. Obviously you don't want a reputation for being susceptible to being successfully blackmailed, but IMHO, maximising expected utility results in a strategy which is not as simple as never responding to blackmail threats.
2khafra11yI think this is correct. Eliezer's spoken of The Strategy of Conflict before, which goes into mathematical detail about the tradeoffs of precommitments against inconsistently rational players. The "no blackmail" thing was in regard to a rational UFAI.

(Deleting this post, or the one I'm replying to, would count),

Eliezer might delete it anyway, although I don't expect it. You made a threat, not an offer. If the fiasco with Roko didn't convince you that he takes decision theory seriously, what will?

This really isn't worth arguing and there isn't any reason to be angry...

You are wrong on both. There is strong signalling going on that gives good evidence regarding both Eliezer's intent and his competence.

What Roko said matters little, what Eliezer said (and did) matters far more. He is the one trying to take over the world.

The problem is, I was/am a very manipulative person by nature, so I really need the conscious overcorrection. Whenever I detect within myself a desire to change someone's opinion, I know how much it weakens my defense against making bad arguments. It's like writing emails late at night: in the process of doing it, I like the resulting text just fine, but I know from experience on a different level that I'm going to be ashamed when I reread it in the morning.

There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.

Might be relevant to the simple math of everything stuff.

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)?

I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".

How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.

Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)

But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.

I have been having this experience quite often, lately.

The solution I'm currently attempting is to stop myself from ever assigning blame, and then treating my failure to communicate/explain as a very difficult and interesting problem (which it is). There are people who consistently do much better than me at solving this problem, so my audience's failure or lack thereof is irrelevant to the possibility for improvement available to me.

I haven't completely implemented this mindset yet, but it seems to be helping so far.

I haven't completely implemented this mindset yet, but it seems to be helping so far.

Yes. Your responses to criticism are generally measured (even though perhaps you would have a right to be a bit less measured), and usually make you look better in terms of signaling. You maintain the high ground, and don't dig yourself into holes.

The attitude you have recently displayed in your posting history in response to criticism could serve as an example to Silas of what to do (rather than people just telling him what not to do). Of course, your approach isn't the only way, but it contains elements that could be useful.

Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

Paul Graham on guarding your creative productivity:

I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]

You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want to think about.

From a recent newspaper story:

The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?

9Blueberry11yIt seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24. Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also one in 10^24, and this happens every four weeks.
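A quick sanity check of that arithmetic, as a minimal sketch; the 1-in-10^6 per-ticket figure is the assumption from the comment above, not the actual Texas Lottery odds:

    # If one ticket wins with probability 1/10^6, four specified tickets
    # winning four drawings has probability (1/10^6)**4 = 1/10^24,
    # the same order of magnitude as the quoted "1 in 18 septillion".
    p_single = 1.0 / 10**6
    p_four = p_single ** 4
    print(p_four)      # 1e-24
    print(1 / p_four)  # 1e+24, i.e. a 1 followed by 24 zeros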
9whpearson11yFrom the article (there is a near invisible more text button) And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)... I did see an analysis of this somewhere, I'll try and dig it up. Here [http://wmbriggs.com/blog/?p=2597] it is. There is hackernews commentary here [http://news.ycombinator.com/item?id=1493784]. I find this, from the original msnbc article, depressing
2nhamann11yIs it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must've figured out something we don't know, given that she's won four times?
4mchouza11yIt's also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.
4Tyrrell_McAllister11yThe most eyebrow-raising part of that article:
[-][anonymous]11y 9

I noticed there is another deleted comment by EY where he explicitly writes:

"...the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us." [Jul 24, 2010 8:31 AM]

3wedrifid11yI stand corrected.

As a suggestion, the phrase "my superior understanding" will rarely be well received, even in a hypothetical.

Do you ever feel angry or irritated when you post? Are you aware of when posts (yours as well as other people's) sound rude or hostile?

I've gone back and edited again. I'd forgotten how much weight you put on that detail; it seemed strange to me at the time, but since it was part of the original feedback loop, I guess I can see why now - you'd already had lots of time to get angry over that particular detail, and I went and rounded it off to something different.

Silas, not having corrected that really wasn't intended as a slight, although I see now that you took it that way. At the time, I interpreted it as you trying to deflect the conversation into (what I saw as) a minor detail. (And yes, you did try to tell me it wasn't minor; I brushed it off as motivated cognition, without properly considering the ramifications.)

6HughRistik11yThis is an example of what I meant when I say that many intellectual issues aren't as obvious as stoplights.

It's just that some people seem to be so cold and calculating that I'm left wondering if there's any empathic similarity at all

I trust you're aware that this is an ironic complaint coming from someone who attributes his social problems to an autism-spectrum disorder?

In any event, I follow discussions like this with morbid fascination, because I sympathize and empathize with both you and your interlocutors. I think you make some really good points that need to be heard, but at the same time I completely understand the criticisms of your tone and manner. ... (read more)

So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.

Now, being a professional exorcist does not give a high prior for rationality.

But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.

I wonder if this uncriticality has anything to do with, well, not expecting to be criticized... (read more)

4LucasSloan11yProbably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.
3jimmy11yI do it sometimes, and I think it helps.

It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36?

This really doesn't have much to do with the market. While I don't know the details of gambling laws in all the US states and Indian nations, I would be very surprised if there were regulations on roulette odds. Many casinos have roulette wheels with only one 0 (paid as if 1/36, actual odds 1/37), and with other casi... (read more)

An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.

2Vladimir_Golovin11yThis implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist" [http://lesswrong.com/lw/eg/what_i_tell_you_three_times_is_true/].

What's the deal with programming, as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., Fizzbuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), but here's what I'd tell t... (read more)

I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).

This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.

8Emile11yJoel Spolsky gave a similar explanation [http://www.joelonsoftware.com/items/2005/01/27.html]. Makes sense. I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
3sketerpot11yDo you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer. Is this reassuring?
2Emile11yYes. Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky. Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
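For the curious, here's roughly how small such an interpreter can be. A minimal sketch in Python, with a made-up handful of words rather than real Forth semantics:

    # Forth-like toy: numbers push onto a stack; words pop their arguments.
    def run(source):
        stack = []
        def dot():
            print(stack.pop())
        words = {
            "+": lambda: stack.append(stack.pop() + stack.pop()),
            "*": lambda: stack.append(stack.pop() * stack.pop()),
            "-": lambda: stack.append(-stack.pop() + stack.pop()),
            "dup": lambda: stack.append(stack[-1]),
            ".": dot,
        }
        for token in source.split():
            if token in words:
                words[token]()
            else:
                stack.append(int(token))  # anything unrecognized must be a number
        return stack

    run("2 3 + 4 * .")  # prints 20

Real Forth adds word definitions, comments and strings, which is where the parsing gets tricky, as Emile notes.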
9wedrifid11yFrom what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.
7JRMayne11yI don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy. (I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)
6sketerpot11yThe bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.) Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
6Douglas_Knight11yThis seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)
7cousin_it11yYour general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated. Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:

    for x in range(1, 101):
        fizz = (x % 3 == 0)
        buzz = (x % 5 == 0)
        if fizz and buzz:
            print "FizzBuzz"
        elif fizz:
            print "Fizz"
        elif buzz:
            print "Buzz"
        else:
            print x

This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter [http://python.org/] and use it to solve some Project Euler problems [http://projecteuler.net/index.php?section=problems] for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
2SilasBarta11yHow did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?
8jimrandomh11yThe least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publically, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration. (I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)
4Vladimir_Nesov11yI think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).
6cousin_it11yMy first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview. Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-) ETA: I just noticed that I didn't answer your last question. Not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.
4Blueberry11yI was very surprised by this until I read the word "Moscow."
5gwern11y--"Epigrams in Programming" [http://www.cs.yale.edu/quotes.html], by Alan J. Perlis; ACM's SIGPLAN publication, September, 1982
5Daniel_Burfoot11yProgramming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.
3gwern11yDo we have any refs for this? I know there's "The Camel Has Two Humps" [http://www.codinghorror.com/blog/2006/07/separating-programming-sheep-from-non-programming-goats.html] (Alan Kay on it [http://secretgeek.net/camel_kay.asp], the PDF [http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf]), but anything else?
2Sniffnoy11yGoing by his other papers [http://www.eis.mdx.ac.uk/research/PhDArea/saeed/], though, it looks like the effect isn't nearly so strong as was originally claimed. (Though that's wrt whether his "consistency test" works; I didn't check whether the bimodalness still holds.)
2Blueberry11yFixed that for you. :) (I'm a current law student.)
4Morendil11yI'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you. The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace. Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.
4MartinB11yFor one, I think you overestimate human curiosity. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by not-competent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway. http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG [http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG]
2sketerpot11yAre there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used. Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
6wedrifid11yThere really are people who would not take that few hours.
4cata11yI don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial. So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
3apophenia11yThere are really people who don't get pointers.
2Morendil11yOne of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.) Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
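The same trick looks almost trivial in a language with first-class functions; here is a minimal Python rendering of the idea (all names made up), where the "struct" carries a function whose first parameter is the struct itself:

    # Swap the stored function at runtime to change the object's behavior --
    # essentially hand-rolled method dispatch.
    def greet_terse(self):
        return "hi"

    def greet_verbose(self):
        return "hello, I am " + self["name"]

    thing = {"name": "frobnitz", "greet": greet_terse}
    print(thing["greet"](thing))     # hi

    thing["greet"] = greet_verbose   # re-point the "function pointer"
    print(thing["greet"](thing))     # hello, I am frobnitz

In C the equivalent is a function pointer stored in a struct, which is roughly how early object systems were built.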

I don't expect him to delete it. However, I don't expect the threat made in the comment to be among the reasons he does not delete it.

5waitingforgodel11yAhh, okay good. LW & EY are awesome -- as I mentioned in the rest of this thread, I don't want to change any more than the smallest bit necessary to avoid future censorship. -wfg

Depends. Do you count honking at me when I don't realize a light has changed? That certainly "worked". I'd do the same to others, and I expect them to do the same to me.

Many people don't see the intellectual issues to be as clear-cut as traffic lights, especially in complex discussions. If someone honks at you on the road and it isn't obvious that you are making a driving error, it's going to get your back up, right? Same thing in intellectual discussions. Even if it should be obvious to someone that they are making an intellectual error, it's not always the right move to honk at them immediately.

I did regard them as loaded questions. That characteristic is orthogonal to their accuracy.

But even if I didn't, your parenthetical has not helped you. It was an attempt to score points, and those points will not make you stronger. That dig at Rain has not helped you communicate or explain anything other than your disdain; you are not trying to solve the problem.

I'm stepping away from this conversation so I don't do anything drastic (that I haven't already deleted). Suffice to say you've succeeded once again in creating an enormous, out of proportion negative emotional response in someone who was genuinely trying to be objective and help you.

I know very few people as effective as you are at generating anger and frustration with the written word.

Compare this

The last one is why I asked you if you were proud of what you did, and why I'm perplexed you took it as a loaded question.

With this

I would most certainly not be proud of having failed to impart my superior understanding to a member of my community who was suffering as a result of lacking that understanding.

This seems like you were going to make a negative judgement if he claimed pride. Thus the question does seem loaded. Pride is one of those things that, fairly often, it is not good to have.

It might well be Rain thinks that you had ... (read more)

http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/28/risky-business-james-bagian-nasa-astronaut-turned-patient-safety-expert-on-being-wrong.aspx

This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence... (read more)

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.

Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?

8Vladimir_M11yYoreth: That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects [http://en.wikipedia.org/wiki/Isaac_Newton's_occult_studies] -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
3Emile11yThe value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology. For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut. (What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)
2wedrifid11yA bad way to draw conclusions. A good way to make significant updates based on inference.
4JoshuaZ11yLakatos, Quine and Kuhn are all worth reading. Recommended works from each follow: Lakatos: "Proofs and Refutations" Quine: "Two Dogmas of Empiricism" Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions" All of these have things which are wrong, but they make arguments that need to be grappled with and understood (The Copernican Revolution is more of a history book than a philosophy book, but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case: I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case that I find weaker versions of his claims highly plausible. Kuhn is also just an excellent writer, full of interesting factual tidbits. This seems, in general, like not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff, not just philosophy. And some of his philosophy is worth understanding simply due to the fact that later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion. His dualism is much more well-defined than that of people before him. Hegel however is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
4mindviews11yYes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist. I've developed quite a respect for Hilary Putnam [http://en.wikipedia.org/wiki/Hilary_Putnam] and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile. I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in. PS - There was this thread [http://news.ycombinator.com/item?id=1503137] over at Hacker News that just came up yesterday if you're looking to cast a wider net.
2Larks11yI've always been told that Hegel basically affixed the section about Prussia due to political pressures, and that modern philosophers totally ignore it. Having said that, I wouldn’t read Hegel. I recommend avoiding reading original texts, and instead reading modern commentaries and compilations. 'Contemporary Readings in Epistemology' was the favoured first-year text at Oxford. Bertrand Russell's "History of Western Philosophy" is quite a good read too. The Stanford Encyclopaedia of Philosophy [http://plato.stanford.edu/entries/knowledge-analysis/] is also very good.
2Emile11yI've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
2wedrifid11yNone that actively affiliate themselves with the label 'philosophy'.

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be ... (read more)

6NancyLebovitz11yI've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.

I have a question about prediction markets. I expect that it has a standard answer.

It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.

Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?

7Unnamed11yOne way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager). Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
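A toy model of that error-cancellation claim, with made-up numbers:

    import random

    # Each bettor believes the true probability plus independent zero-mean noise.
    random.seed(0)
    true_p = 0.6
    beliefs = [true_p + random.gauss(0, 0.15) for _ in range(100000)]

    # Stylized two-sided market: the price settles near the average belief,
    # so the independent errors largely cancel.
    market_price = sum(beliefs) / len(beliefs)
    print(round(market_price, 3))  # close to 0.6

    # A casino, by contrast, posts a fixed -EV price (e.g. 35-to-1 on a
    # 1-in-38 event) no matter what anyone believes, so nothing cancels.

This isn't how real market equilibria form, of course; it just illustrates why independent errors wash out when both sides can bet.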
9Strange711yI have also heard of at least one professional gambler who makes his living by identifying and confronting other peoples' superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong. Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.
5Vladimir_Nesov11yThe money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.
5Tyrrell_McAllister11yRight. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players. But that's not what I was trying to get at with "predictably inefficient". What I meant was this: Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot. Won't you be consistently wrong if you try that strategy? If so, how Is this consistent wrongness accounted for in futarchy theory? I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem? I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.
5Unnamed11yRoulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38. This table [http://en.wikipedia.org/wiki/Roulette#Bet_odds_table_.28American_roulette.29] shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
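Spelling out those roulette numbers in a minimal sketch:

    # American roulette straight-up bet: 38 slots, 35-to-1 payout on a win.
    p_win = 1.0 / 38               # actual probability, ~0.0263
    implied_p = 1.0 / 36           # what the payout alone would suggest, ~0.0278
    ev_per_dollar = 35 * p_win - (1 - p_win)
    print(implied_p - p_win)       # the consistent overestimate, ~0.0015
    print(ev_per_dollar)           # ~-0.0526, the uniform house edge

The uniform EV across bets is what lets the relative payouts still encode correct relative probabilities.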
4orthonormal11yIn the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).
3orthonormal11yThe most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house. If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).
3Dagon11yCasinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.
2orthonormal11yNormally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.

I think you're overestimating your ability to see what exactly is wrong and how to fix it. Humans (westerners?) are biased towards thinking that improvements they propose would indeed make things better. This tendency is particularly visible in politics, where it causes the most damage.

More generally, humans are probably biased towards thinking their own ideas are particularly good, hence the "not invented here" syndrome, etc. Outside of politics, the level of confidence rarely reaches the level of threatening death and destruction if one's ideas are not accepted.

Do you believe you have a tone problem?

Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?

If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariab... (read more)

4[anonymous]11yIt may have to do with the manner you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted "X is amazing...and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.
3Will_Newsome11yWow, that seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with, or heuristic you used, that I could have used to think up your hypothesis?
2[anonymous]11yI've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it. Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.
2NancyLebovitz11yCould it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.

Day-to-day question:

I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.

  1. Action: Capture the frog and put it in the woods o
... (read more)
6Eliezer Yudkowsky11yI don't consider frogs to be objects of moral worth.

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.

-- David Pearce via Facebook

Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

6Eliezer Yudkowsky11y"Frogs have subjective experience" is the biggy, there's a number of other things I already know myself to be confused about which impact on that, and so I don't know exactly what I should be looking for in the frog that would make me think it had a sense of its own existence. Certainly there are any number of news items I could receive about the frog's mental abilities, brain complexity, type of algorithmic processing, ability to reflect on its own thought processes, etcetera, which would make me think it was more likely that the frog was what a non-confused person of myself would regard as fulfilling the predicate I currently call "capable of experiencing pain", as opposed to being a more complicated version of neural network reinforcement-learning algorithms that I have no qualms about running on a computer. A simple example would be if frogs could recognize dots painted on them when seeing themselves in mirrors, or if frogs showed signs of being able to learn very simple grammar like "jump blue box". (If all human beings were being cryonically suspended I would start agitating for the chimpanzees.)
4DanielVarga11yI am very surprised that you suggest that "having subjective experience" is a yes/no thing. I thought it is consensus opinion here that it is not. I am not sure about others on LW, but I would even go three steps further: it is not even a strict ordering of things. It is not even a partial ordering of things. I believe it can be only defined in the context of an Observer and an Object, where Observer gives some amount of weight to the theory that Object's subjective experience is similar to Observer's own.
3Utilitarian11yI like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well. But I'm curious: What does ability to learn simple grammar have to do with subjective experience?

I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?

5Vladimir_Nesov11yYou'd need to change that to 10^6 specks and 10^15 frogs or something, because emotional reaction to choosing to kill the frogs is also part of the consequences of the decision, and this particular consequence might have moral value that outweighs one speck. Your emotional reaction to a decision about human lives is irrelevant, the lives in question hold most of the moral worth, while with a decision to kill billions of cockroaches (to be safe from the question of moral worth of frogs), the lives of the cockroaches are irrelevant, while your emotional reaction holds most of moral worth.
3Utilitarian11yI'm not so sure [http://www.utilitarian-essays.com/insect-pain.html]. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
2Bongo11yHopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.
2RichardKennaway11ySeconded, and how do you (Eliezer) rate other creatures on the Great Chain of Being?
2RichardKennaway11yWould you save a stranded frog, though?
4Nisan11y2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds. If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help. Also, wash your hands before eating.
8jimrandomh11yThe goal of helping frogs is to gain fuzzies, not utilons. Thinking about all the frogs that you don't have the opportunity to help would mean losing those fuzzies.
5Rain11yThere's no utility in saving (animal) life? Or is that only for this particular instance? Edit 20-Jun-2014: Frogs saved since my original post: 21.5. Frogs I've failed to save: 23.5.
3NancyLebovitz11yHow often do you find frogs in the stairwell? Could it make sense to carry something (a plastic bag?) to pick up the frog with so that you don't get slime on your hands? If it were me, I think I'd go with plastic bag or other hand cover, possibly have room temperature water with me (probably good enough for frogs, and I'm willing to drink the stuff), and put the frog on the lawn unless I'm in the mood for a bit of a walk and seeing the woods. I have no doubt that I would habitually wonder whether there are weird events in people's lives which are the result of interventions by incomprehensibly powerful beings.

How Facts Backfire

Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of info

... (read more)
7Kaj_Sotala11yInteresting tidbit from the article: I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.
6whpearson11yI'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation. Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so something to be ignored (if not credible) or countered. We really need some students to do some tests upon, or a better way of searching psych research than Google.
6MBlume11yData point: After years of having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done. I think Jesus and Mo helped too, now that I think of it. That period's already getting murky in my head =/ Anyhow, point is, none of the above are what you'd call gentle. ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"
4cupholder11yPresumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.
3twanvl11yI believe aggressive debates are not about convincing the people you are debating with; that is likely to be impossible. Instead it is about convincing third parties who have not yet made up their minds. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.
2Christian_Szegedy11yI think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important. On the other hand, if you e.g. independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)
2JoshuaZ11yThe primary study in question is here [http://www-personal.umich.edu/~bnyhan/nyhan-reifler.pdf]. I haven't been able to locate online a copy of the study about self-esteem and corrections.

Hah. I first wrote the example using xrange, then changed it to range to make it less confusing to someone who doesn't know Python :-)
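For anyone who hasn't met the distinction being alluded to, a quick sketch of the Python 2 semantics (Python 3 later removed xrange and made range lazy, so this is strictly a Python 2 example):

```python
# Python 2 semantics: range builds the whole list in memory up front;
# xrange yields values one at a time. (Python 3 removed xrange and made
# range behave lazily, like the old xrange.)
squares = [x * x for x in range(5)]    # materializes [0, 1, 2, 3, 4] first
total = sum(x * x for x in xrange(5))  # never builds the full list

print(squares)  # [0, 1, 4, 9, 16]
print(total)    # 30
```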

Wouldn't the easiest solution be just to have Eliezer agree to have Roko's posts and comments restored (the ones that he voluntarily deleted)? My understanding is that Roko already agreed, and we're now just waiting on Eliezer's word. I don't see any reason why he wouldn't agree. Has anyone actually asked him directly?

Yes, I read the whole thread (and the banned doubleplus ungood post by Roko).

I wouldn't mind if you were putting "your money" on the table. What I mind is threatening to act with the goal of reducing mankind's chances of survival. That's not "your money".

If you had just threatened to stop donating money to SIAI (do you donate?), no problemo. Whether that action has an impact on existential risk is unclear; my problem isn't doing actions that might increase existential risk, it's doing actions whose purpose is to increase exist... (read more)

5kodos9611yWhat if somebody else had a similar button, but with 1 in 100,000 probability. Would it be ok to threaten to push your 1 in a million button if the other guy pressed his 1 in 100,000 button? If you had reason to believe that the other guy, for some reason, would take your threat seriously, but wasn't taking the threat of his own button seriously? OK, that got kind of convoluted, but do you see what I'm saying?
2Emile11yIf you were sufficiently certain that the situation is as you describe (least convenient world), yes, it would be OK to threaten (and carry out the threat). If however you obtained the information through a bug-ridden device that is known to be biased towards overconfidence in this kind of situation - then such threats would be immoral. And I think most imaginable real-world situations fall in the second category.
1kodos9611yI think the only thing we disagree on then is the word "immoral". I would say that it may very well be incorrect, but not immoral, so long as he is being sincere in his motivations. ETA: ok after thinking about it some more, I guess I could see how it might be considered immoral (in the least convenient world to the point of view I'm arguing). I guess it kind of depends on the specifics of what's going on inside his head, which I'm of course not privy to.
1Emile11yI'm not sure it would count as "immoral"; I guess it also depends on how you define the terms. I see this as a case of the more general "does the end justify the means?". In principle, the ends do justify the means, if you're sufficiently confident that those means will indeed result in that end, and that the end is really valuable. In practice, the human brain is biased towards finding ends to justify means that just happen to bring power or prestige to the human in question. In fact, most people doing anything widely considered "bad" can come up with a good story about how it's kinda justified. So, part of what makes a human moral is willingness to correct this, to listen to the voice of doubt, or at least to consider that one may be wrong, especially when taking action that might harm others.

The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.

I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"-- it's a matter of group identity more than "Allah wants it".

I don't have specifics for the 9/11 bombers.

I'd love if you'd send me a short PM explaining how I'm wrong. Thanks.

[-][anonymous]11y 6

Thanks, I often commit that mistake. I just write without thinking much, not recognizing the potential importance the given issues might bear. I guess the reason is that I mainly write out of an urge for feedback and to alleviate mental load.

It's not really meant as an excuse but rather an exposure of how one can use the same arguments to support a different purpose while criticizing others for those arguments and not their differing purpose. And the bottom line is that there will have to be a tradeoff between protecting values and the violation of the same to g... (read more)

You appear to be expressing disrespect. I do not find that appealing.

[-][anonymous]11y 6

Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).

I was wondering whether there might be a more focused way of doing this. I.e. a short-term workshop, a few writers who hav... (read more)

3[anonymous]11yWe could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work. Possibilities:

1. Significantly raise the minimum karma level.
2. An editorial system: a more "advanced" member has to read your post before it becomes top-level.
3. A wiki page of instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, and common errors in reasoning or etiquette.
4. A social norm that encourages editing (including totally reworking an essay). The convention for blog posts on the internet in general mandates against editing - a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.

A related thought on karma - I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.
5jimrandomh11yKuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted). Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though. We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.
4xamdam11yAnother technical solution. Not trivial to implement, but it also contains significant side benefits.

* Find some subset of sequences and other highly ranked posts that are "super-core" and have large consensus not just in karma, but also in agreement by high-karma members (say the top ten).
* Create a multiple choice test and implement it online, for which external technologies already exist, I am sure. Some karma + passing the test gets top posting privileges.

I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.
3WrongBot11yIs there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.) I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though. (I think this is a great idea.)
2Larks11yI didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.

Rationality applied to swimming

The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.

Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.

I'm posting this partly because it's alway... (read more)

Thought without Language: a discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.

Edited because I labeled the link as "Language without Thought" -- this counts as an example of itself.

Two things of interest to Less Wrong:

First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence per se, and more on the ability to process information and estimate your own knowledge base. They found for example that people who were overconfident about their knowledge level were much more likely to be religious. There may... (read more)

The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.

Now try the sequel (another YouTube video).

Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.

More here.

Is a FAI friendly towards superintelligent, highly conscious uFAI? No, it's not. It will kill it.

Are you sure? Random alternative possibilities:

  • Hack it and make it friendly
  • Assimilate it
  • Externally constrain its actions
  • Toss it into another universe where humanity doesn't exist

Unless you're one yourself, it's rather difficult to predict what other options a superintelligence might come up with, that you never even considered.

[-][anonymous]11y 5

What was the argument then? This thread suggests my point of view.

Here one of many comments from the thread above and elsewhere indicating that the deletion was due to the risk I mentioned:

I read the article, and it struck me as dangerous. [JoshuaZ 01 August 2010 04:46:39AM]

I've just read EY's comment. It's indeed mainly about protecting people from themselves causing unfriendly AI to blackmail them. This conclusion is hard to come by since the post was deleted without explanation. Still, it's basically the same argument and quite a few people on LW seem to f... (read more)

If he said (and this is all public), "You know, one of my vices is that I let myself get fat. But I don't really care. I actually prefer my present lifestyle; I'm quite happy this way."

And then I said, "I used to be that way, but then I decided to lose weight and succeeded."

And then he said, "Meh. Whatever works for you, I guess."

"I know some tricks..." "Not interested."

And then a year later I said, "One difference between you and me is that when I was fat, I didn't decide to stay that way."

What would you think? Because that's what happened, once you carry the transformation through.

I have not seen the original post, but can't someone simply post it somewhere else? Is deleting it from here really a solution (assuming there's real danger)? BTW, I can't really see how a post on a board can be dangerous in the way implied here.

5AngryParsley11yThe likely explanation is that people who read the article agreed it was dangerous. If some of them had decided the censorship was unjustified, LW might look like Digg after the AACS key controversy [http://en.wikipedia.org/wiki/AACS_encryption_key_controversy#DMCA_notices_and_Digg] .

I read the article, and it struck me as dangerous. I'm going to be somewhat vague and say that the article caused two potential forms of danger: one unlikely but extremely bad if it occurs, and the other less damaging but having evidence (in the article itself) that the damage type had occurred. FWIW, Roko seemed to agree after some discussion that spreading the idea did pose a real danger.

I've gone back and edited the comment in question, and I apologize for not having done so earlier. (And, while this doesn't really justify my not having edited it earlier - the reason I didn't edit it then was that I still hadn't fully understood what happened. Revisiting it now, I noticed what I missed the last time around - namely, a full enumeration of the people who could've prevented the situation from blowing up in the first place, including myself.) I'm not sure whether linking to or summarizing that thread would be a net positive or negative, so I won't, but you can if you think making my edit visible is worth the chance that it derails the current conversation here.

It's just that some people seem to be so cold and calculating that I'm left wondering if there's any empathic similarity at all

I think that's a consequence of distance. It's easier to be a jerk to someone, deliberately or accidentally, when they seem like just a username on a forum; it's harder to recognize that a conversation has gone awry when it's all text; and it's harder to back down and apologize when there are third parties watching.

In online conversations, the emotional palette for most people seems to be: detached, amused, or angry. All other e... (read more)

Here's something that's only a suggestion-- I think it's worked pretty well for me, but I don't have the same emotional habits you do.

Use punishment only as a last resort. It's a very inexact tool for getting what you want, or even for avoiding what you don't want.

The fact that you're angry isn't going to make punishment into a better tool.

When other people used punishment to get what they wanted from you, it didn't work for them, did it?

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?

It's no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It's also no more useful.

Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling... (read more)

2Kevin11ySure, I've been thinking about it, I need $10MM to produce it though.

Nobel Laureate Jean-Marie Lehn is a transhumanist.

We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of reactions come from our fight or flight instinct. As long as this part of the brain can take over control the rational part of the brain (we will face

... (read more)

May I suggest also that we be careful to distinguish cold fusion from fusion in general? Cold fusion is extremely unlikely. Hot fusion reactors whether laser confinement or magnetic confinement already exist, the only issue is getting them to produce more useful energy than you put in. This is very different than cold fusion where the scientific consensus is that there's nothing fusing.

"Therefore, “Hostile Wife Phenomenon” is actually “Distant, socially dysfunctional Husband Syndrome” which manifests frequently among cryonics proponents. As a coping mechanism, they project (!) their disorder onto their wives and women in general to justify their continued obsessions and emotional failings."

Assorted hilarious anti-cryonics comments on the accelerating future thread

7nhamann11yIf anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter [http://www.metafilter.com/] recently had a post [http://www.metafilter.com/93644/Cryonics-and-marriage] on the NY Times article. The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam" [http://www.metafilter.com/93644/Cryonics-and-marriage#3178529], a pseudoscience [http://www.metafilter.com/93644/Cryonics-and-marriage#3178588], science fiction [http://www.metafilter.com/93644/Cryonics-and-marriage#3178693] (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists [http://www.metafilter.com/93644/Cryonics-and-marriage#3178563] (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish [http://www.metafilter.com/93644/Cryonics-and-marriage#3178568], more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist [http://www.metafilter.com/93644/Cryonics-and-marriage#3178542]. All in all, a delightful cornucopia of irrationality. ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post [http://www.metafilter.com/93644/Cryonics-and-marriage#3178521]. There was also a comment from someone registered with Alcor [http://www.metafilter.com/93644/Cryonics-and-marriage#3178769] that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received. Also, check out this hilarious description of Robin Hanson [http://www.metafilter.com/93644/Cryonics-and-marriage#3178821] from a commenter there: I guess that
8RobinZ11yThe responses are interesting. I think this is the most helpful to my understanding: I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.
8[anonymous]11yum... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility. We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out. Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.
6Christian_Szegedy11yThere are a lot of alternatives to fusion energy, and since energy production is a widely recognized societal issue, making individual bets on it is not an immediate matter of life and death on a personal level. I agree with you, though, that a sufficiently high probability estimate of the workability of cryonics is necessary to rationally spend money on it. However, if you give a 1% chance to both fusion and cryonics working, it could still make sense to bet on the latter but not on the former.
5lsparrish11yThat's ok, it's a skepticism friendly site as well. I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.
4EStokes11yThere's always a way to estimate how likely something is, even if it's not a very accurate prediction. And "mere" used like that seems kinda like a dark side word, if you'll excuse me. Cryonics is theoretically possible, in that it isn't inconsistent with science/physics as we know it so far. I can't really delve into this part much, as I don't know anything about cold fusion and thus can't understand the comparison properly, but it sounds as if it might be inconsistent with physics? Possibly relevant: Is Molecular Nanotechnology Scientific? [http://lesswrong.com/lw/io/is_molecular_nanotechnology_scientific/] Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion. And this is just the impression I get, but it sounds like you're being a contrarian contrarian. I think it's your last sentence: it made me think of Lonely Dissent [http://lesswrong.com/lw/mb/lonely_dissent/].
5[anonymous]11yThe unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist's reasoning than criticizing a religious person's. It's easy to criticize people who tell you they don't mind. The result is that it's those who need constructive criticism the most who get the least. I'll admit I fall into this trap sometimes.
4RobinZ11yWell, right off the bat, there's a difference [http://lesswrong.com/lw/4h/when_truth_isnt_enough/] between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.
3jimrandomh11yYou seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5. Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.

I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.

Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.

Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.

2Will_Newsome11yI wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations? I'm guessing it's rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one's preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.
3FAWS11yOnce you are beyond the most elementary of statements I really don't think so; rather the opposite, at least for unique rather than for repeated statements. Most untrue statements are probably either ad hoc lies ("You look great." "That's a great gift." "I don't have any money with me.") or misremembered information. In the case of ad hoc lies there is not enough time to invent plausible details, and inventing details without time to think them through increases the risk of being caught; in the case of misremembered information you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn't make that error. Of course more information simply means including more things even the best experts on the subject are simply wrong about, as well as more room for misrememberings, but I think the first effect dominates because there are many subjects the second effect doesn't really apply to, e.g. the content of a work of fiction or the constitution of a state (to an extent even legal matters in general). Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends. Consider the so-called conjunction fallacy: if it was maladaptive for evaluating the truth of statements encountered normally, it probably wouldn't exist. So in everyday conversation (or at least the sort of situations that are relevant for the propagation of the memes and/or genes involved) complex statements, at least of those kinds that can be observed to be evaluated "fallaciously", are probably more likely to be true.
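As a starting point for the Fermi calculation Will_Newsome asks for, a toy sketch. Every share and truth rate below is a guess pulled out of thin air, included only to show the shape of the estimate:

```python
# Toy Fermi estimate: average probability that a spoken statement is true.
# Every share and truth rate below is an invented placeholder.
statement_types = {
    # type: (share of daily utterances, assumed probability of truth)
    "trivial sensory observations":   (0.40, 0.95),
    "preferences & performatives":    (0.30, 0.90),
    "factual claims from memory":     (0.20, 0.70),
    "ad hoc lies / social smoothing": (0.10, 0.20),
}

avg_truth = sum(share * p for share, p in statement_types.values())
print(round(avg_truth, 2))  # 0.81 under these made-up numbers
```

The interesting arguments are all about which buckets exist and what their rates are; the arithmetic itself is trivial once those guesses are on the table.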
3JoshuaZ11yThere isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.). Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities.
2[anonymous]11yThat's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.
4lsparrish11yHave you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?
3[anonymous]11yMaybe $50. That's around the most I'd be willing to accept losing completely.
3lsparrish11yNice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount. Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?

???

"People who say stupid things are, all else being equal, more likely to say other stupid things in related areas".

2Vladimir_M11yThat's a very vague statement, however. How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?

This description seems very British and I'm not quite clear on some of it. For instance, I had no idea what a strop is. Urban Dictionary defines it as sulking, being angry, or being in a bad mood.

Some of the other things seem like they would only make sense with more cultural context, specifically the emphasis on bantering and making witty remarks.

I wouldn't say that this guy has great social skills, given his getting drunk and stealing food, slamming doors and walking around naked, and so forth. Pretty much the opposite, in fact.

As to why he got kicked ou... (read more)

8whpearson11yBy social skills I meant what people with Aspergers lack naturally: magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl. The point of the story is that having charisma is not in itself the get-out-of-jail-free card it is sometimes described as here. Sorry for the British-ness. It is hard to talk about social situations without thinking in my native idiom. I'll try and translate it tomorrow.
2Blueberry11yYou're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others. Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme). Understood, and thanks for writing that story; it was really interesting. The whole British way of thinking is foreign to this clueless American, and I'm curious about it. (I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)

The fallacy here is thinking there's a difference between the way the ideal gas laws emerge from particle physics, and the way intelligence emerges from neurons and neurotransmitters. I've only heard "emergent" used in the following way:

A system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, and the high-level description is not easily predictable from the low-level description

For instance, gliders moving across the screen diagonally is emergent in Conway's Life.

The "easily predictable" part is what makes emergence in the map, not the territory.

Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html

Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning/specialized AI are becoming increasingly common place to the point where they are being used for very mundane tasks.

6billswift11ySomebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.
4wnoise11yI think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.
3Blueberry11yAs did economics.

They could talk about it elsewhere.

My understanding is that waitingforgodel doesn't particularly want to discuss that topic, but thinks that it's important that LW's moderation policy be changed in the future for other reasons. In that case it appears to me the best way to go about it is to try to convince Eliezer using rational arguments.

A public commitment has been made.

Commitment to a particular moderation policy?

Eliezer has a bias toward secrecy.

I'm inclined to agree, but do you have an argument that he is biased (instead of us)?

In my obse

... (read more)

Technical analysis does not imply bias either way. Just curiosity. ;)

I don't consider frogs to be objects of moral worth. -- Eliezer Yudkowsky

Yeah ok, frogs...but wait! This is the person who's going to design the moral seed of our coming god-emperor. I'm not sure if everyone here is aware of the range of consequences, while using the same as corroboration of the correctness of pursuing this route. That is, are we going to replace unfriendly AI with an unknown EY? Are we yet at the point that we can tell EY is THE master who'll decide upon what's reasonable to say in public and what should be deleted?

Ask yourself if you really, seriously believe in the ideas posed on LW, enough to follow them into the realms of radical oppression in the name of good and evil.

I don't understand social cues. I created an internal GLUT for social interaction, the same way I learned the English language and grammar. It took me a very long time, a lot of effort, and a willingness to update rapidly and on the fly.

My primary tools are the questions, "How so? You mean ___? Which part? What do you mean? What?" (short, open-ended interjections to provide free range for expansion), a judgment of when to use them, listening to and cataloging people's statements about their social reactions (in the same way one might learn what t... (read more)

It's just an example.

The importance of an argument doesn't matter for the severity of an error in reasoning present in that argument. The error might be unimportant in itself, but that it was made in an unimportant argument doesn't argue for the unimportance of the error.

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

2Blueberry11yNo, it's not meaningless, because if it's true, the matrix's implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it's true, there's also the possibility of the simulation ending prematurely.

Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.

The reaction seems to be basically, "but they're wrong, why should they get to use that term?"

Huh?

3ata11yI haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.
2JoshuaZ11yThere's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.

A second post has been banned. Strange: it was on a totally different topic from Roko's.

4Eliezer Yudkowsky11yStill the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.
6Roko11ySo, this is the problem that didn't occur to me. I assumed implicitly that because such things were easy for me to brush off, the same logic would apply to others. Which is kind of silly, because I knew about one of the previous worriers from Benton House. I think that the bottom line here is that I need to update in favor of greater general caution surrounding anything to do with the singularity, AGI, etc.
5xamdam11yWas the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would call for closing the thread? I am curious about the decision criteria. Thanks. As an aside, I've had some SL-related psychological effects, particularly related to the material notion of self: a bit of trouble going to sleep, realizing that logically there is little distinction from a death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds", certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy). This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.
7Apprentice11yIt is impossible to have rules without Mr. Potter exploiting them. [http://www.fanfiction.net/s/5782108/32/Harry_Potter_and_the_Methods_of_Rationality]
2NancyLebovitz11yIs it OCD or depression? Depression can include (is defined by?) obsessively thinking about things that make one feel worse.
2JoshuaZ11yDepressive thinking generally focuses on short term issues or general failure. I'm not sure this reflects that. Frankly, it seems to come across superficially at least more like paranoia, especially of the form that one historically saw (and still sees) in some Christians worrying about hell and whether or not they are saved. The reaction to these threads is making me substantially update my estimates both for LW as a rational community and for our ability to discuss issues in a productive fashion.
4cousin_it11y(comment edited) I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.
2jimrandomh11yYep. But not unexpectedly this time; homung posted in the open thread that he was looking for 20 karma so he could post on the subject, and I sent him a private message saying he shouldn't, which he either didn't see or ignored.

So let me try to rewrite that (and don't be afraid to call this word salad):

(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.)

You're Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision theoretically and fun theoretically speaking. Isn't it odd that you find yourself at this time and place given all the people you ... (read more)

2Kevin11yIt all adds up to normality, damn it!

Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initi... (read more)

When I was a young child my dad was building an extension on our house. He dug a deep trench for the foundations and one morning I came down to find what my hazy memory suggests were thousands of baby frogs from a nearby lake. I spent some time ferrying buckets of tiny frogs from the trench to the top of our driveway in a bid to save them from their fate.

The following morning on the way to school I passed thousands of flat baby frogs on the road. I believe this early lesson may have inured me to the starkly beautiful viciousness of nature.

I'm presuming he's talking about measure, using the standard Lebesgue measure on R.

2JoshuaZ11yYes, although generally when asking these sorts of questions one looks at the standard Lebesgue measure on [0,1] or [0,1) since that's easier to normalize. I've been told that this result also holds for any bell-curve distribution centered at 0, but I haven't seen a proof of that and it isn't at all obvious to me how to construct one.
2orthonormal11yWell, the quick way is to note that the bell-curve measure is absolutely continuous [http://en.wikipedia.org/wiki/Absolutely_continuous_measure#Absolute_continuity_of_measures] with respect to Lebesgue measure, as is any other measure given by an integrable distribution function on the real line. (If you want, you can do this by hand as well, comparing the probability of a small bounded open set in the bell curve distribution with its Lebesgue measure, taking limits, and then removing the condition of boundedness.)
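For reference, the absolute-continuity step orthonormal sketches, written out (standard measure theory, nothing specific to this thread): if a probability measure mu has an integrable density f with respect to Lebesgue measure lambda (the Gaussian f(x) = e^{-x^2/2}/sqrt(2 pi) being one example), then

```latex
% Absolute continuity: a density w.r.t. Lebesgue measure transfers
% null sets. With f(x) = e^{-x^2/2}/\sqrt{2\pi} this covers the
% standard bell curve centered at 0.
\[
  \mu(A) \;=\; \int_A f \, d\lambda
  \quad\Longrightarrow\quad
  \bigl(\, \lambda(A) = 0 \;\Rightarrow\; \mu(A) = 0 \,\bigr),
  \qquad \text{i.e. } \mu \ll \lambda .
\]
```

So any Lebesgue-null set is also null under any distribution with an integrable density, which is exactly the transfer JoshuaZ asked about for bell curves centered at 0.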

Yes, that's what I'm saying.

And I'm not attempting to weaken or strengthen the case against anything in particular.

Just to be clear, I didn't learn about this via the Roko link (nor did I say in PM that I did), I used the Roko link after finding out about it on messages higher up in this thread (July 2010 open thread pt 2). Without the link I would have used the LW search bar.

No biggie, I wouldn't even mention it except that it seems to be your justification for voting weirdness.

4wedrifid11yThank you. Finding out about the issue via a link from the top posts sounded improbable so I was surprised. This confirmation makes jimrandomh's voting scheme even more outrageous. "People don't approve of what Eliezer did to Roko... let's hide all evidence that Roko ever existed!"

Yes, again modulo not knowing how to analyze questions of who moves first (e.g. others who consider this and then make themselves not consider if he'll respond).

Ironically, your comment series is evidence that censorship partially succeeded in this case. Although existential risk could increase, that was not the primary reason for suppressing the idea in the post.

4timtyler11ySucceeded - in promoting what end?
2kodos9611yStreisand Effect [http://en.wikipedia.org/wiki/Streisand_effect]

I've actually speculated as to whether Eliezer was going MoR:Quirrel on us. Given that aggressive censorship was obviously going to backfire, a shrewd agent would not use such an approach if they wanted to actually achieve the superficially apparent goal. Whenever I see an intelligent, rational player do something that seems to be contrary to their interests I take a second look to see if I am understanding what their real motivations are. This is an absolutely vital skill when dealing with people in a corporate environment.

Could it be the case that Eliezer is passionate about wanting people to consider torture:AIs and so did whatever he could to make it seem important to people, even though it meant taking a PR hit in the process? I actually thought this question through for several minutes before feeling it was safe to dismiss the possibility.

2kodos9611ySo I actually haven't read MoR - could you summarize the reference for me? I mean, I can basically see what you're saying from context, but is there anything beyond that it would be useful to know? My instinct is that it just doesn't feel like something Eliezer would do. But what do I know?
4wedrifid11yThere isn't much more to it than can be inferred from the context. MoR:Quirrel is just a clever, devious and rational manipulator. I don't either... but then that's the assumption MoR:Harry made about MoR:Dumbledore. At Quirrel's prompting Harry decided "it was time and past time to ask Draco Malfoy what the other side of that war had to say about the character of Albus Percival Wulfric Brian Dumbledore." :) (Of course EY hasn't been in a war and I don't think there are any people who accuse him of being an especially devious political manipulator.)

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Note to reader: This thread is curiosity-inducing, and this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a dece... (read more)

3cousin_it11yI heard it and I don't think it's "wise" to defer to Eliezer's judgment on the matter. I stopped discussing this stuff on LW for a different reason: I feel LW is Eliezer's site and I'd better follow his requests here.
6XiXiDu11yEY might have a disproportionately high influence on us and our future. In this case I believe it is appropriate not to grant him certain rights, i.e. to constrain his protection from criticism. He still has the option to ban people, further explain himself or just ignore it. But just censoring something that is by definition highly important, and not stating any reasons for it, makes me highly suspicious. Even more so if I'm told not to pursue this issue any further in the manner of a sacred truth you are not supposed to know.

It still works as a signal, because (1) signing a comment requires some extra effort, and (2) it is harder to retract a comment that has been signed (since the signature remains valid proof of authorship even if the original comment is edited or deleted). A little bit of real cost and utility goes a long way.

I don't know about Rain, but I'd be interested to read your answer.

My argument is that you ALMOST certainly don't care about ants at all, but that there is some extremely small uncertainty about what your values are. The same argument applies to the disutility of getting a dust speck in your eye.

And from this I can't infer whether communication succeeded or you are just making a social sound (not that it's very polite of me to remark this).

6Armok_GoB11yI first thought you had a problem with me making up the number -1 000 000 from nowhere. Later I realized you meant that to some people it might not be obvious that the utility of 50 years of torture is the average utility per second times the number of seconds.
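Spelled out with the same placeholder figure (the -1 000 000 per second is Armok_GoB's arbitrary number, not a derived or endorsed quantity):

```python
# Armok_GoB's implicit arithmetic, using the arbitrary -1,000,000
# utility-per-second placeholder (not a derived or endorsed figure).
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7
torture_seconds = 50 * seconds_per_year        # ~1.58e9 seconds
total_utility = -10**6 * torture_seconds       # ~-1.58e15

print("%.3g seconds, total utility %.3g" % (torture_seconds, total_utility))
```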

I don't post things like this because I think they're right, I post them because I think they are interesting. The geometry of TV signals and box springs causing cancer on the left sides of people's bodies in Western countries...that's a clever bit of hypothesizing, right or wrong.

In this case, an organization I know nothing about (Vetenskap och Folkbildning from Sweden) says that Olle Johansson, one of the researchers who came up with the box spring hypothesis, is a quack. In fact, he was "Misleader of the year" in 2004. What does this mean in

... (read more)

Who's right? Who knows. It's a fine opportunity to remain skeptical.

Bullshit. The 'skeptical' thing to do would be to take 30 seconds to think about the theory's physical plausibility before posting it on one's blog, not regurgitate the theory and cover one's ass with an I'm-so-balanced-look-there's-two-sides-to-the-issue fallacy.

TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer? It could heat you up, or it could induce currents within your body. But however much heating it causes, the temperature increase caused by heat insulation from your mattress and cover is surely much greater, and I reckon you'd get stronger induced currents from your alarm clock/computer/ceiling light/bedside lamp or whatever other circuitry's switched on in your bedroom. (And wouldn't you get a weird arrhythmia kicking off before cancer anyway?)

(As long as I'm venting, it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right,' because surely it's only interesting because it might be right? Bleh.)

it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right'

Yup, that's the bit I thought made it appropriate for LW.

It reminded me of my speculations on "asymmetric intellectual warfare" - we are bombarded all day long with things that are "interesting" in one sense or another but should still be dismissed outright, if only because paying attention to all of them would leave us with nothing left over for worthwhile items.

But we can also note regularities in the patterns of which claims of this kind get raised to the level of serious consideration. I'm still perplexed by how seriously mainstream media takes claims of "electrosensitivity", but not totally surprised: there is something that seems "culturally appropriate" to the claims. The rate at which cell phones have spread through our culture has made "radio waves" more available as a potential source of worry, and has tended to legitimize a particular subset of all possible absurd claims.

If breast cancer and melanomas are more likely on the left side of the body at a level that's statistically significant, that's interesting even if the proposed explanation is nonsense.

5Morendil11yEven so, ISTM that picking through the linked article for its many flaws in reasoning would have been more interesting even than not-quite-endorsing its conclusions. What I find interesting is the question, what motivates an influential blogger with a large audience to pass on this particular kind of factoid? The ICCI blog has an explanation based on relevance theory and "the joy of superstition" [http://www.cognitionandculture.net/index.php?option=com_content&view=article&id=671:paul-the-octopus-relevance-and-superstitions&catid=29:dan&Itemid=34] , but unfortunately (?) it involves Paul the Octopus: (ETA: note the parallel between the above and "I post these things because they are interesting, not because they're right". And to be lucid, my own expectations of relevance get aroused for the same reasons as most everyone else's; I just happen to be lucky enough to know a blog where I can raise the discussion to the meta level.)

(So this is just about the first real post I made here and I kinda have stage fright posting here, so if it's horribly bad and uninteresting please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")

1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Let's say 1 second of tort... (read more)

5Vladimir_Nesov11yGiven some heavy utilitarian assumptions. This isn't an argument, it's more plausible to just postulate disutility of torture without explanation.
[-][anonymous]11y 3

It seems like there's at least some interest in doing something to deal with helping people develop posting skills through a means other than simply writing lots of articles and bombarding the community with them. The editorial system seems like it has a lot of promising aspects.

The main thing is, it seems more valuable to implement a weak system than to simply talk about implementing a stronger system, so whether the editorial system is the best that can be done depends on whether the people in charge of the community are interested in implementing ... (read more)

3rhollerith_dot_com11yEY has stated in the past that the reason most suggestions do not result in a change in the web site is that no programmer (or no programmer that EY and EY's agents trust) is available to make the change. Also, I think he reads only a fraction of LW these months.

Sparked by my recent interest in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And it doesn't seem to have been active recently.

I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?

UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)

4Eliezer Yudkowsky11yYes.

Something I wonder about is just how many people on LW might have difficulties with the metaphors used.

An example: In http://lesswrong.com/lw/1e/raising_the_sanity_waterline/, I still haven't quite figured out what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither had someone else I asked about it.

7Sniffnoy11yI think "waterline" here should be taken in the same context as "A rising tide floats all boats".
[-][anonymous]11y 3

Are there any Less Wrongers in the Grand Rapids area that might be interested in meeting up at some point?

2Psy-Kosh11yGrand Rapids, MI, you mean? I'm in Michigan, but West Bloomfield, so a couple hours away, but still, if we found some more MI LWers, maybe.

This is my PGP public key. In the future, anything I write which seems especially important will be signed. This is more for signaling purposes than any fear of impersonation -- signing a post is a way to strongly signal its seriousness.

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (Cygwin)

mQGiBExOb4IRBAClNdK7kU0hDjEnR9KC+ga8Atu6IJ5pS9rKzPUtV9HWaYiuYldv
VDrMIFiBY1R7LKzbEVD2hc5wHdCUoBKNfNVaGXkPDFFguJ2D1LRgy0omHaxM7AB4
woFmm4drftyWaFhO8ruYZ1qSm7aebPymqGZQv/dV8tSzx8guMh4V0ree3wCgzaVX
wQcQucSLnKI3VbiyZQMAQKcEAI9aJRQoY1WFWaGDsAzCKBHtJIEooc+3+/2S
... (read more)
2bogus11yYou may want to copy this key block to a user page on the LW wiki, where it can be easily referenced in the future.
2khafra11yThat would also have the advantage of hopefully requiring different credentials to access, so it would be marginally harder to change the recorded public key while signing a forged post with it.
4bogus11yNot just harder; it would be all but impossible, since the wiki keeps a history of all changes (unlike LW posts) and jimrandomh is not a wiki sysop.
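For anyone who later wants to check a signed post against the key above, here is a minimal sketch in Python, assuming the third-party python-gnupg package; the file names are hypothetical.

    # Verify a clearsigned post against a known public key.
    # Assumes python-gnupg (pip install python-gnupg); file names
    # here are placeholders, not real artifacts from this thread.
    import gnupg

    gpg = gnupg.GPG()
    with open("jimrandomh_pubkey.asc") as f:
        gpg.import_keys(f.read())        # the key block posted above

    with open("signed_post.txt", "rb") as f:
        verified = gpg.verify_file(f)    # the clearsigned post

    print(verified.valid, verified.fingerprint)
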

Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.

2[anonymous]11yYes. I previously believed that thinking a true statement could only be harmful by either leading to a false statement, stealing cognitive resources, or lowering confidence. I also believed that general rationality plus a few meditative tricks would be a sufficient and fully general defense against all such harms. I know better now.

I have exactly the same problem. I think I understand where mine comes from, from being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then when I was more vulnerable whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive.

You can't get psychotherapy from a friend, it doesn't work and can't work because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be not connected to your social network.

2daedalus2u11yThe issues dealt with in psychotherapy are fundamentally non-rational issues. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts. I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous and takes a long time and considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts. A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings, because feelings that are not felt cannot be expressed, and so cannot be observed, and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.
2cousin_it11yYep - I'm having some fun there right now, my nick is want_to_want. Anyone knowledgeable in psych research, join in!

Do you like the LW wiki page (actually, pages) on Free Will? I just wrote a post to Scott Aaronson's blog, and the post assumed an understanding of the compatibilist notion of free will. I hoped to link to the LW wiki, but when I looked at it, I decided not to, because the page is unsuitable as a quick introduction.

EDIT: Come over, it is an interesting discussion of highly LW-relevant topics. I even managed to drop the "don't confuse the map with the territory"-bomb. As a bonus, you can watch the original topic of Scott's post: His diavlog with A... (read more)

If only for the cheap signaling value.

My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)

I find it difficult to explain, but know that I disagree with you. The world is worth saving precisely because of the components that make it up, including frogs. Three does follow from 1, unless you have a (fairly large) list of properties or objects in the world that you've deemed out of scope (not worth saving independently of the entire world). Do you have such a list, even implicitly? I might agree that frogs are out of scope, as that was one component of my motivation for posting this thread.

And stating that there are "more efficient" ways ... (read more)

I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.

Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds ... (read more)

I don't know. I'm not Eliezer. I'd save the frogs because it's fun, not because of some theory.

John Hari - My Experiment With Smart Drugs (2008)

How does everyone here feel about these 'Smart Drugs'? They seem quite tempting to me, but are there candidates that have been in use long enough to be considered safe?

3NancyLebovitz11yIt surprised me that he didn't consider taking provigil one or two days a week. It also should have surprised me (but didn't-- it just occurred to me) that he didn't consider testing the drugs' effects on his creativity.
3arundelo11yThere's some discussion here [http://lesswrong.com/lw/10u/ask_lesswrong_human_cognitive_enhancement_now/] and here [http://lesswrong.com/lw/fu/share_your_antiakrasia_tricks/cl7].

Hm. It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption. I'll try to rewrite that whole comment when I'm less tired.

3Vladimir_Nesov11yIllusion of transparency: they can probably generate sense in response to anything, but it's not necessarily a faithful translation of what you say.
2Will_Newsome11yConsider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I'm saying doesn't seem to be meaningful, which is a wonderful trait.
3Vladimir_Nesov11yWell, that was me calling bullshit.
3Kevin11yWas your point that Eliezer's Everett Branch is weird enough already that it shouldn't be that surprising if universally improbable things have occurred?
2Will_Newsome11yErm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.
[-][anonymous]11y 3

I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.

I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.

It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I ... (read more)

4[anonymous]11yUpdate for the curious: did talk to a friend (the same one mentioned above, who, I think, is a better "shrink" than some real shrinks) and am now resolved to kick this thing, because sooner or later, excessive approval-seeking will get me in trouble. I'm starting with what I think of as homebrew CBT: I will not gratuitously apologize or verbally belittle myself. I will try to replace "I suck, everyone hates me" thoughts with saner alternatives. I will keep doing this even when it seems stupid and self-deluding. Hopefully the concrete behavioral stuff will affect the higher-level stuff. After all, a mathematician I really admire gave me career advice -- and it was "Believe in yourself." Yeah, in those words, and he's a logical guy, not very soft and fuzzy.
3khafra11yAlicorn's Living Luminously [http://lesswrong.com/lw/1xh/living_luminously/] series covers some methods of systematic mental introspection and tweaking like this. The comments on alief [http://lesswrong.com/lw/1xh/living_luminously/1ruj?context=1] are especially applicable.
2WrongBot11yFor what it's worth, this is often known as Imposter Syndrome [http://en.wikipedia.org/wiki/Impostor_syndrome], though it's not any sort of real psychiatric diagnosis. Unfortunately, I'm not aware of any reliable strategies for defeating it; I have a friend who has had similar issues in a more academic context and she seems to have largely overcome the problem, but I'm not sure as to how.

An object lesson in how not to think about the future:

http://www.futuretimeline.net/

(from Pharyngula)

2Christian_Szegedy11yCould be funny, if it was a joke... :(

Is there any possibility of constructing some kind of frog barrier at the top of the stairwell or amphibian escape ramp (PVC pipe?) or does the layout of the public space make that impractical? My preference would be for an engineering solution if I hypothetically valued frog survival highly. A web-cam activated frog elevator would be entertaining but probably overkill.

Of course this may not be optimal if the warm-fuzzies from individual frog-assisting episodes are of greater expected utility than automated out-of-sight out-of-mind frog moving machinery.

I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?

3[anonymous]11yDefinitely wait. My personal favorite timing is one day for each new (substantial) post.

Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That's how the OO works; it's how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery's each() function, which lets you pass in a function which iterates over every element in a collection.

The clearest, most concise book on this is Doug Crockford's Javascript: The Good Parts. Highly recommended.
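For readers who haven't seen the pattern, here is the each()-style idiom sketched in Python (keeping this thread's examples in one language); the names are invented, but the closure mechanics are exactly what Crockford describes for JavaScript.

    # Passing a function as a value, plus a closure that captures a
    # variable from its enclosing scope -- the core idiom, in Python.
    def each(collection, fn):
        """Call fn on every element, like jQuery's each()."""
        for item in collection:
            fn(item)

    def make_logger(prefix):
        def log(item):
            print(prefix + ": " + str(item))   # 'prefix' is closed over
        return log

    each(["a", "b", "c"], make_logger("element"))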

The problem with religious beliefs is not that they are false (they don't have to be), but that they are believed for the purpose of signaling belonging to a group, rather than because they are true. This does cause them to often be wrong or not even wrong, but the wrongness is not the problem, epistemic practices that lead to them are. Correspondingly, the reasons for a given religious belief turning out to be wrong are a different kind of story from the reasons for a given factual belief turning out to be wrong. The comparison of factual mistakes in religious beliefs and factual mistakes made by people who try to figure things out is a shallow analogy that glosses over the substance of the processes.

This isn't a general property of irrational numbers, although a randomly chosen irrational number will have this property with probability 1. In fact, any random real number will have this property with probability 1 (rational numbers have measure 0, since they form a countable set). This is pretty easy to prove if one is familiar with Lebesgue measure.

There are irrational numbers which do not share this property. For example, .101001000100001000001... is irrational and does not share this property.
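For concreteness, that counterexample has a closed form: the 1s sit at the triangular-number positions, so

    0.101001000100001\ldots = \sum_{k=1}^{\infty} 10^{-k(k+1)/2}

The gaps between successive 1s grow without bound, so the expansion is not eventually periodic and the number is irrational; yet the expansion never even contains the digit 2, so it clearly lacks the property in question.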

Yes. My point was that emergence isn't about what we know how to derive from lower-level descriptions, it's about what we can easily see and predict from lower-level descriptions. Like Roko, I want my definition of emergence to include the ideal gas laws (and I haven't heard the word used to exclude them).

Also see this comment.

Not in Python 3! range in Python 3 works like xrange in the previous versions (and xrange doesn't exist any more).

(but the print calls would use a different syntax)

4apophenia11yIn fact, xrange has worked this way since early in Python 2, which is why in 3.0 they removed xrange and gave range its lazy behavior.
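A quick check of the Python 3 behavior described above (Python 2's range, by contrast, built a full list):

    r = range(10**9)         # instant: a lazy range object, not a list
    print(r[500])            # 500 -- indexing works without expanding it
    print(list(range(5)))    # [0, 1, 2, 3, 4] -- materialize explicitly
    # xrange(5)              # NameError in Python 3: xrange is gone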

That actually seems to be the main reason we would try to make him work within the LW power structure.

I don't think it was the main reason for my suggestion. I thought that threatening Eliezer with existential risk was obviously a suboptimal strategy for wfg, and looked for a better alternative to suggest to him. Rational argument was the first thing that came to mind since that's always been how I got what I wanted from Eliezer in the past.

You might be right that there are other even more effective approaches wfg could take to get what he wants, but to... (read more)

1wedrifid11yI am trying to remember the reference to Eliezer's discussion of keeping science safe by limiting it to people who are able to discover it themselves, i.e. security by FOFY. I know he has created a post somewhere but don't have the link (or keyword cache). If I recall, he also had Harry preach on the subject and referenced an explicit name. I wouldn't go so far as to say the idea is useless, but I also don't quite have Eliezer's faith. I also wouldn't want to reply to a straw man from my hazy recollections.

Hey Jim,

It sounds like my post rubbed you the wrong way, that wasn't my intention.

I do understand your math (world pop / a mil), did you understand mine?

Providing a credible threat reduces existential risk and saves lives... significantly more than the 6700 you cite.

Check out this article and the wikipedia article on MAD, then reread the post you're replying to and see if it makes more sense. The Wei Dai exchange might also help shed some light. If you ask questions here I'll do my best to walk you through anything you get stuck on.

I don't feel comfortable... (read more)

(Please don't vote unless you've read the whole thread found here)

I did not choose to downvote the parent based on this but I was tempted. I may have upvoted without the prescription.

2waitingforgodel11yFair enough, we need to figure out a better way to navigate to the relevant part of "open thread" posts. The load comments above link doesn't load comments below what's above :-/ Usability, speaking the truth, and avoiding redundant comments are much more important than votes to me; if I could type it again I'd go with: please don't reply unless you've read the whole thread.

Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail.

I'm also pretty sure it's irrational to ignore such things when making decisions. Perhaps not in a game theory sense, but absolutely in the practical life-theory sense.

As an example, our entire legal system is based on these sorts of credible threats.

To be precise: not respond when whether or not one is 'blackmailed' depends counterfactually on whether one would respond, which isn't the case with the law. (Of course, there are unresolved problems with who 'moves first', etc.)

Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.

2XiXiDu11yCoherent Extrapolated Volition [http://intelligence.org/upload/CEV.html] This is exactly the argument religious people use to excuse any shortcomings of their personal FAI. Namely, their personal FAI knows better than you what's best for you AND everyone else. What average people do is follow what is being taught here on LW. They decide based on their prior. Their probability estimates tell them that their FAI is likely to exist, and they make up excuses for extraordinary decisions based on its possible existence. That is, they support their FAI while trying to inhibit other uFAI, all in the best interest of the world at large.
3katydee11yThe link and quotation you posted do not seem to back up your argument that the Abrahamic god follows CEV. Could you clarify?
3XiXiDu11yIt's not about it following CEV, but about people believing that it acts in their best interest. Reasons are subordinate. It is the similar system of positive and negative incentives that I wanted to highlight. I grew up in a family of Jehovah's Witnesses. I can assure you that all believed this to be the case. Faith is considered the way to happiness. Positive incentive: Negative incentive: I could find heaps of arguments for Christianity that highlight the same belief that God knows what's best for you and the world. This is what most people on this planet believe, and this is also the underpinning of the rapture of the nerds.
2katydee11yAh, I understand-- except that I think the "negative incentive" element we're discussing is absurd, would obviously trigger failsafes with CEV as described, etc.
1XiXiDu11yThere'll always be elements that suffer, that is, that subjectively perceive the FAI as uFAI.
1orthonormal11yYahweh and the associated moral system are far from incomprehensible if you know the cultural context of the Israelites. It's a recognizably human morality, just a brutal one obsessed with purity of various sorts.
7XiXiDu11yIt is not about the moral system being incomprehensible, but about the acts of the FAI. Whenever something bad happens, religious people excuse it with an argument based on "higher intention". This is the gist of what I wanted to highlight: the similarity between religious people and true believers in the technological singularity and AIs. This is not to say it is the same; I'm not arguing about that. I'm saying that this might draw the same kind of people, committing the same kind of atrocities. This is very dangerous. If people don't like anything happening, i.e. don't understand it, it's claimed to be a means to an end that will ultimately benefit their extrapolated volition. People are not going to claim this in public. But I know that there are people here on LW who are disposed to extensive violence if necessary. To be clear, I do not doubt the possibilities talked about on LW. I'm not saying they are nonsense like the old religions. What I'm on about is that the ideas the SIAI is based on, while not being nonsense, are poised to draw the same fanatical following and cause the same extreme decisions. Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous. This crowd is actually highly intelligent, and their incentives are based on more than fairy tales told by goatherds. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? More so as in this case the very same people who believe it are the ones who think they must act themselves, because their God doesn't even exist yet.

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet?

Yes. I would also drop a nuke on New York if it were the only way to prevent global nuclear war. These are both extremely unlikely scenarios.

It's very correct to be suspicious of claims that the stakes are that high, given that irrational memes have a habit of postulating such high stakes. However, assuming thereby that the stakes never could actually be that high, regardless of the evidence, is another way of shooting yourself in the foot.

2[anonymous]11yI do not assume that. I've always been a vegetarian who's in favor of animal experiments. I'd drop a nuke to prevent less than what you described above.

public criticism != public embarrassment

The former might be highly useful, if one chooses to learn from it, the latter is almost never useful.

2SilasBarta11yThen maybe Crono shouldn't have brought it up repeatedly? (And the things I was referring to in my previous remarks crossed over into embarrassment, despite also being criticisms.)

But PGP's security and quality pretty much make up for that loss in signaling seriousness, don't you think?

It was about the possibility of torturing someone by creating copies of the person and torturing them.

I consider these results perfectly intuitive; why shouldn't they be? 3^^^3 is a really big number, and it makes sense that you have to be really careful around it.
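For readers unfamiliar with the notation, a minimal sketch of Knuth's up-arrow recursion shows why "really big" is an understatement:

    def up_arrow(a, n, b):
        """Knuth's up-arrow: a followed by n arrows and then b.
        One arrow is ordinary exponentiation."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 would be up_arrow(3, 3, 3): a power tower of 3s of height
    # 7,625,597,484,987 -- far beyond any physical computation.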

[-][anonymous]11y 2

I would like to help. And I may see your problem: you are very sensitive to insults from others, but have a harder time seeing when you hurt others. But when you do so, you are affected greatly.

I just have no idea how to help. My skin is probably overly thick. When I perceive insults, I tend to ignore them and downgrade the insulter as someone I want to interact with or help, rather than kick up a social fuss.

Do you want answers to those questions, or are they purely rhetorical?

2SilasBarta11yYes, I want answers.
3daedalus2u11yThis is how people with Asperger's or autism experience interacting with people who are neurotypically developed (for the most part).

Hold on -- those are important articles to read, and they do move you toward a resolution of that problem. But I don't think they fully dissolve/answer the exact question daedalus2u is asking.

For example, EY has written this article, grappling with but ultimately not resolving the question of whether you should care about "other copies" of you, why you are not indifferent between yourself vs. someone else jumping off a cliff, etc.

I don't deny that the existing articles do resolve some of the problems daedalus2u is posing, but they don't cover everything he asked.

Unless I've missed something?

Eliezer's sequence on quantum mechanics and personal identity is almost exactly what you're looking for, I think.

Is it my imagination, or is "social construct" the sociologist version of "emergent phenomenon"?

Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.

Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)

I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.

In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.

That, and you repeat yourself. A lot.

6NancyLebovitz11yThis reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons -- the ability to track all changes in karma for one's posts.
2xamdam11yI see this as a feature request - would be great to have a view of your recent posts/comments that had action (karma or descendant comments). (rhetorically) If karma is meant as feedback, this would be a great way to get it.

The appropriate question to ask is:

Given the number of people who play all the different kinds of lotteries, what are the odds of there being some person who wins four (modest) jackpots?
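A back-of-the-envelope version of that question, with loudly made-up inputs (player counts, ticket volumes, and odds are all invented):

    # Toy model: how many 4-time jackpot winners should exist at all?
    from math import comb

    players = 10_000_000      # hypothetical heavy lottery players
    tickets = 20_000          # hypothetical lifetime tickets each
    p = 1e-5                  # hypothetical per-ticket jackpot odds

    # P(a given player wins at least 4 times), via the binomial CDF
    p_at_most_3 = sum(comb(tickets, k) * p**k * (1 - p)**(tickets - k)
                      for k in range(4))
    print(players * (1 - p_at_most_3))   # hundreds of expected winners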

Incidentally, three wins came from scratch-off tickets, which seem inherently less secure than the ones with a central drawing. (And you can also do something akin to card-counting with them: the odds change depending on how many tickets have already been sold and how many prizes have been claimed. Some states make this information public, so you can sometimes find tickets with a positive expected value in dollars.)
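And a toy version of the scratch-off expected-value arithmetic (all figures invented):

    # With published counts of remaining tickets and unclaimed prizes,
    # the EV of the *remaining* tickets can drift from the printed odds.
    ticket_price = 5.00
    tickets_left = 40_000
    prizes_left = {100: 50, 500: 10, 10_000: 2}   # prize -> count left

    ev = sum(p * n for p, n in prizes_left.items()) / tickets_left
    print(ev - ticket_price)   # -4.25 here; can turn positive late in a game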

If a belief doesn't fit an ideological or religious framework, I think that "X-ist" and "X-ism" labels are often bad. I actually use the phrase "ID proponent" fairly often, partially for this reason. I'm not sure, however, that this case is completely symmetric, given that ID proponents self-identify as part of the "intelligent design movement" (a term used repeatedly by William Dembski, for example, and occasionally by Michael Behe).

So I was pondering doing a post on the etiology of sexual orientation (as a lead-in to how political/moral beliefs lead to factual ones, not vice versa).

I came across this article, which I found myself nodding along with, until I noticed the source...

Oops! Although they stress the voluntary nature of their interventions, NARTH is an organization devoted to zapping the fabulous out of gay people, using such brilliant methodology as slapping a rubber band against one's wrist every time one sees an attractive person with the wrong set of chromosomes. From the... (read more)

4WrongBot11yFor what it's worth, rubber band snapping is a pretty popular thought-stopping technique in CBT for dealing with obsessive-type behaviors, though I believe there's some debate over how effective it is. I know it's been used to address morbid jealousy [http://books.google.com/books?id=R-GRXVsFFEgC&lpg=PA215&ots=nK0QdveQxj&dq=stephen%20josephson%20jealousy&lr&pg=PA223#v=onepage&q=stephen%20josephson%20jealousy&f=false] , though I don't know to what extent or if more scientific studies have been conducted.

There is something that bothers me, and I would like to know if it bothers anyone else. I call it "Argument by Silliness".

Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."

I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you... (read more)

2NancyLebovitz11yYeah-- argument by silliness (I think I'd describe it as finding something about the argument which can be made to sound silly) is one of the things I don't like about normal people.
2Peter_Lambert-Cole11yThat's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side, and then they rationalize themselves into believing it's true (which it is) because they don't want to admit they were conned.

Luke Muehlhauser just posted about Friendly AI and Desirism at his blog. It tends to have a more general audience than LW; comments posted there could help spread the word. Desirism and the Singularity

Desirism and the Singularity, in which one of my favourite atheist communities is inching towards singularitarian ideas.

Looks like Emotiv's BCI is making noticeable progress (from the Minsky demo)

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html

but still using bold guys :)

I'm not saying we exist in all branches, just that all branches are necessary, and therefore we are necessary also. Essentially I'm saying that everything that is actual is necessary.