Related To: Are Your Enemies Innately Evil?, Talking Snakes: A Cautionary Tale, How to Not Lose an Argument

Eliezer's excellent article Are Your Enemies Innately Evil? points out that when two people have a strong disagreement, it's often the case that each person sincerely believes that he or she is on the right side. Yvain's excellent article Talking Snakes: A Cautionary Tale highlights the fact that to each such person, without knowledge of the larger context that the other person's beliefs fit into, the other person's beliefs can appear absurd. This phenomenon occurs frequently enough that it's important for each participant in an argument to make a strong effort to understand where the other person is coming from and to frame one's own ideas with the other person's perspective in mind.

Last month I made a sequence of posts [1], [2], [3], [4] raising concerns about the fruitfulness of SIAI's approach to reducing existential risk. My concerns were sincere and I made my sequence of postings in good faith. All the same, there's a sense in which my sequence of postings was a failure. In the first of these I argued that the SIAI staff should place greater emphasis on public relations. Ironically, in my subsequent postings I myself should have placed greater emphasis on public relations. I made mistakes which damaged my credibility and barred me from serious consideration by some of those whom I hoped to influence.

In the present posting I catalog these mistakes and describe the related lessons that I've learned about communication.

Mistake #1: Starting during the Singularity Summit

I started my string of posts during the Singularity Summit. This was interpreted by some to be underhanded and overly aggressive. In fact, the coincidence of my string of posts with the Singularity Summit was influenced more by the appearance of XiXiDu's Should I Believe What the SIAI claims? than by anything else, but it's understandable that some SIAI supporters would construe the timing of my posts as premeditated and hostile in nature. Moreover, the timing of my posts did not give the SIAI staff a fair chance to respond in real time. I should have avoided posting during a period when I knew that the SIAI staff would be occupied, waiting until a week after the Singularity Summit to begin my sequence of posts.

Mistake #2: Failing to balance criticism with praise

As Robin Hanson says in Against Disclaimers:

If you say anything nice (or critical) about anything associated with a group or person you are presumed to support (or oppose) them overall.

I don't agree with Hanson that people are wrong to presume this - I think that statistically speaking, the above presumption is correct.

For this reason, it's important to balance criticism of a group which one does not oppose with praise. I think that a number of things that SIAI staff have done have had favorable expected impacts on existential risk, even if I think other things they have done have negative expected impact. By failing to make this point salient, I misled Airedale and others into believing that I have an agenda against SIAI.

Mistake #3: Letting my emotions get the better of me

My first pair of postings attracted considerable criticism, most of which appeared to me to be ungrounded. I unreasonably assumed that these criticisms were made in bad faith, failing to take to heart the message of Talking Snakes: A Cautionary Tale that one's positions can appear absurd to those who have access to a different set of contextual data from one's own. As Gandhi said:

...what appears to be truth to the one may appear to be error to the other.

We're wired to generalize from one example and erroneously assume that others have access to the same context that we do. As such, it's natural for us to assume that when others strongly disagree with us, it's because they're unreasonable people. While this is understandable, it's conducive to emotional agitation, which when left unchecked typically leads to further misunderstanding.

I should have waited until I had returned to emotional equilibrium before continuing my string of postings beyond the first two. Because I did not wait until returning to emotional equilibrium, my final pair of postings was less effectiveness-oriented than it should have been and more about satisfying my immediate need for self-expression. I wholeheartedly agree with a relevant quote by Eliezer from Circular Altruism:

This isn't about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan

Mistake #4: Getting personal with insufficient justification

As Eliezer has said in Politics is the Mind-Killer, it's best to avoid touching on emotionally charged topics when possible. One LW poster who's really great at this and who I look to as a role model in this regard is Yvain.

In my posting on The Importance of Self-Doubt I leveled personal criticisms which many LW commentators felt uncomfortable with [1], [2], [3], [4]. It was wrong for me to make such personal criticisms without having thoroughly explored alternative avenues for accomplishing my goals. At least initially, I could have spoken in more general terms as prase did in a comment on my post - this may have sufficed to accomplish my goals without the need to discuss the sensitive subject matter that I did.

Mistake #5: Failing to share my posts with an SIAI supporter before posting

It's best to share one's proposed writings with a member of a given group before offering public criticisms of the activities of that group's members. This gives him or her an opportunity to respond and provide context of which one may be unaware. After I made my sequence of postings, I had extensive dialogue with SIAI Visiting Fellow Carl Shulman. In the course of this dialogue I realized that I had crucial misconceptions about some of SIAI's activities. I had been unaware of some of the activities which SIAI staff have been engaging in, activities which I judge to have significant positive expected value. I had also misinterpreted some of SIAI's policies in ways that made them look worse than they now appear to me to be.

Sharing my posts with Carl before posting would have given me the opportunity to offer a more evenhanded account of SIAI's activities and would have given me the feedback needed to avoid being misinterpreted.

Mistake #6: Expressing apparently absurd views before contextualizing them

In a comment on one of my postings, I expressed very low confidence in the success of Eliezer's project. In line with Talking Snakes: A Cautionary Tale, I imagine that a staunch atheist would perceive a fundamentalist Christian's probability estimate of the truth of Christianity to be absurd and that, on the flip side, a fundamentalist Christian would perceive a staunch atheist's probability estimate of the truth of Christianity to be absurd. In the absence of further context, the beliefs of somebody coming from a very different worldview inevitably seem absurd, independently of whether or not they're well grounded.

There are two problems with beginning a conversation on a topic by expressing positions wildly different from those of one's conversation partners. One is that this tends to damage one's own credibility in one's conversation partners' eyes. The other is that doing so often carries an implicit suggestion that one's conversation partners are very irrational. As Robin Hanson says in Disagreement is Disrespect:

...while disagreement isn’t hate, it is disrespect.  When you knowingly disagree with someone you are judging them to be less rational than you, at least on that topic.

Extreme disagreement can come across as extreme disrespect. In line with what Yvain says in How to Not Lose an Argument, expressing extreme disagreement usually has the effect of putting one's conversation partners on the defensive and is detrimental to their ability to Leave a Line of Retreat.

In a comment on my Existential Risk and Public Relations posting, Vladimir_Nesov said:

The level of certainty is not up for grabs. You are as confident as you happen to be, this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

I disagree with Vladimir_Nesov that changing one's apparent level of confidence is equivalent to lying. There are many possible orders in which one can state one's beliefs about the world. At least initially, presenting the factors that lead one to one's conclusion before presenting one's conclusion projects a lower level of confidence in one's conclusion than presenting one's conclusions before presenting the factors that lead one to these conclusions. Altering one's order of presentation in this fashion is not equivalent to lying and moreover is actually conducive to rational discourse.

As Hugh Ristik said in response to Reason is not the only means of overcoming bias,

The goal of using these forms of influence and rhetoric is not to switch the person you are debating from mindlessly disagreeing with you to mindlessly agreeing with you.

[...]

One of the best ways to change the minds of people who disagree with you is to cultivate an intellectual friendship with them, where you demonstrate a willingness to consider their ideas and update your positions, if they in return demonstrate the willingness to do the same for you. Such a relationship rests on both reciprocity and liking. Not only do you make it easier for them to back down and agree with you, but you make it easier for yourself to back down and agree with them.

When you have set up a context for the discussion where one person backing down isn't framed as admitting defeat, then it's a lot easier to do. You can back down and state agreement with them as a way to signal open-mindedness and the willingness to compromise, in order to encourage those qualities also in your debate partner. Over time, both people's positions will shift towards each other, though not necessarily symmetrically.

Even though this sort of discourse is full of influence, bias, and signaling, it actually promotes rational discussion between many people better than trying to act like Spock and expecting people you are debating to do the same.

I should have preceded my expression of very low confidence in the success of Eliezer's project with a careful and systematic discussion of the factors that led me to my conclusion. 

Aside from my failure to give proper background for my conclusion, I also failed to be sufficiently precise in stating my conclusion. One LW poster interpreted my reference to "Eliezer's Friendly AI project" to be "the totality of Eliezer's efforts to lead to the creation of a Friendly AI." This is not the interpretation that I intended - in particular I was not including Eliezer's networking and advocacy efforts (which may be positive and highly significant) under the umbrella of "Eliezer's Friendly AI project." By "Eliezer's Friendly AI project" I meant "Eliezer's attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people." I should have made a sharper claim to avoid the appearance of overconfidence.

Mistake #7: Failing to give sufficient context for my remarks on transparency and accountability

After I made my Transparency and Accountability posting, Yvain commented

The bulk of this is about a vague impression that SIAI isn't transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn't a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn't seem like a specific failure on SIAI's part not to include this. So why the feeling that they're not transparent and accountable?

In my own mind it was clear what I meant by transparency and accountability, but my perspective is sufficiently exotic that it's understandable that readers like Yvain would find my remarks puzzling or even incoherent. One aspect of the situation is that I share GiveWell's skeptical Bayesian prior. In A conflict of Bayesian priors?, Holden Karnofsky says:

When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?

Our default assumption, or prior, is that a charity - at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid - is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing - just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.

I share GiveWell's skeptical prior when it comes to the areas that GiveWell has studied most, and I feel that it's justified to an even greater extent when applied to the cause of existential risk reduction, for the reason given by prase:

The problem is, if the cause is put so far in the future and based so much on speculations, there is no fixed point to look at when countering one's own biases, and the risk of a gross overestimation of one's agenda becomes huge.

Because my own attitude toward the viability of philanthropic endeavors in general is so different from that of many LW posters, when I suggested that SIAI is insufficiently transparent and accountable, many LW posters felt that I was unfairly singling out SIAI. Statements originating from a skeptical Bayesian prior toward philanthropy are easily misinterpreted in this fashion. As Holden says:

This question might be at the core of our disagreements with many

[...]

Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good.

I should have been more explicit about my Bayesian prior before suggesting that SIAI should be more transparent and accountable. This would have made it more clear that I was not singling SIAI out. Now, in the body of my original post I attempted to allude to my skeptical Bayesian prior when I said:

I agree ... that in evaluating charities which are not transparent and accountable, we should assume the worst.

but this statement was itself prone to misinterpretation. In particular, some LW posters interpreted it literally when I had intended "assume the worst" to be a shorthand figure of speech for "assume that things are considerably worse than they superficially appear to be." Eliezer responded by saying

Assuming that much of the worst isn't rational

I totally agree with Eliezer that literally assuming the worst is not rational. I thought that my intended meaning would be clear (because the literal meaning is obviously false), but in light of contextual cues that made it appear as though I had an agenda against SIAI, my shorthand was prone to misinterpretation. I should have been precise about what my prior assumption is about charities that are not transparent and accountable, saying: "my prior assumption is that funding a given charity which is not transparent and accountable has slight positive expected value which is dwarfed by the positive expected value of funding the best transparent and accountable charities."
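To make the shape of that prior concrete, here is a minimal sketch with purely made-up numbers; none of these figures are estimates of any actual charity's impact, they only illustrate what "slight positive, but dwarfed" means:

```python
# Illustrative only: every figure below is made up to show the shape of the
# stated prior, not an estimate of any actual charity's impact.

ev_per_dollar_opaque_charity = 0.05   # slight positive expected value per dollar
ev_per_dollar_best_transparent = 2.0  # much larger expected value per dollar

assert ev_per_dollar_opaque_charity > 0                                    # not literally "the worst"
assert ev_per_dollar_best_transparent > 10 * ev_per_dollar_opaque_charity  # but dwarfed by the best
```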

As Eliezer suggested, I also should have made it more clear what I consider to be an appropriate level of transparency and accountability for an existential risk reduction charity. After I read Yvain's comment referenced above, I made an attempt to explain what I had in mind by transparency and accountability in a pair of responses to him [1], [2], but I should have done this in the body of my main post before posting. Moreover, I should have preempted his remark:

Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can't measure what percentage safer the world is, since the world-saving is still in basic research phase. You can't measure the value of the Manhattan Project in "cities destroyed per year" while it's still going on.

by citing Holden's tentative list of questions for existential risk reduction charities.

Mistake #8: Mentioning developing world aid charities in juxtaposition with existential risk reduction

In the original version of my Transparency and Accountability posting I said

I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.

In fact, I meant precisely what I said and no more, but as Hanson says in Against Disclaimers, people presume that:

If you say you prefer option A to option B, you also prefer A to any option C.

Because I did not add a disclaimer, Airedale understood me to be advocating in favor of VillageReach and StopTB over all other available options. Those who know me well know that over the past six months I've been in the process of grappling with the question of which forms of philanthropy are most effective from a utilitarian perspective and that I've been searching for a good donation opportunity which is more connected with the long-term future of humanity than VillageReach's mission is. But it was unreasonable for me to assume that my readers would know where I was coming from. 

In a comment on the first of my sequence of postings, orthonormal said:

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

From the point of view of the typical LW poster, it would have been natural for me to address orthonormal's remark in my brief discussion of the relative merits of charities for those who take astronomical waste seriously, but I did not do so. This led some [1], [2], [3] to question my seriousness of purpose and further contributed to the appearance that I have an agenda against SIAI. Shortly after I made my post, Carl Shulman commented:

The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me.

After reading over his comment and others and thinking about them, I edited my post to avoid the appearance of favoring developing world aid over existential risk reduction, but the damage had already been done. Based on the original text of my posting and my track record of donating exclusively to VillageReach, many LW posters have persistently understood me to have an agenda in favor of developing world aid and against existential risk reduction charities.

The original phrasing of my post made sense from my own point of view. I believe supporting GiveWell's recommended charities has high expected value because I believe that doing so strengthens a culture of effective philanthropy and that in the long run this will meaningfully lower existential risk. But my thinking here is highly non-obvious and it was unreasonable for me to expect that it would be evident to readers. It's easy to forget that others can't read our minds. I damaged my credibility by mentioning developing world aid charities in juxtaposition with existential risk reduction without offering careful explanation for why I was doing so.

My reference to developing world aid charities was also not effectiveness-oriented. As far as I know, most SIAI donors are not considering donating to developing world aid charities. As described under the heading "Mistake #3" above, I slipped up and let my desire for personal expression take precedence over actually getting things done. As I described in Missed Opportunities For Doing Well By Doing Good, I personally had a great experience with discovering GiveWell and giving to VillageReach. Instead of carefully taking the time to get to know my audience, I simple-mindedly generalized from one example and erroneously assumed that my readers would be coming from a perspective similar to my own.


Conclusion:

My recent experience has given me heightened respect for the careful writing style of LW posters like Yvain and Carl Shulman. Writing in this style requires hard work and the ability to delay gratification, but the cost can be well worth it in the end. When one is writing for an audience that one doesn't know very well, there's a substantial risk of being misinterpreted because one's readers do not have enough context to understand what one is driving at. This risk can be mitigated by taking the time to provide detailed background for one's readers and by taking great care to avoid making claims (whether explicit or implicit) that are too strong. In principle one can always qualify one's remarks later on, but it's important to remember that, as komponisto said,

First impressions really do matter

so it's preferable to avoid being misunderstood the first time around. On the flip side, it's important to remember that one may be misled by one's own first impressions. There are LW posters whom I initially took to have a hostile agenda against me but whom I now understand to have been acting in good faith.

My recent experience was my first time writing about a controversial subject in public, and it has been a substantive learning experience for me. I would like to thank the Less Wrong community for giving me this opportunity. I'm especially grateful to posters CarlShulman, Airedale, steven0461, Jordan, Komponisto, Yvain, orthonormal, Unknowns, Wei_Dai, Will_Newsome, Mitchell_Porter, rhollerith_dot_com, Eneasz, Jasen and PeerInfinity for their willingness to engage with me and help me understand why some of what I said and did was subject to misinterpretation. I look forward to incorporating the lessons that I've learned into my future communication practices.

Comments:

Thank you for posting this. I did not attend very closely to the other posts you wrote on SIAI. This one grabbed my attention because I am in the middle of preparing my first Less Wrong top level post; I would like it if people like it, and I am a little anxious about how it will go. I think one thing I am going to do is post it and not read any of the comments for a week, or at least not post any follow-on comments for a week. I see many times people make an OK post or a post with only small flaws and then they jump into the discussion and look far worse in their comments than they did with their top post.

Craig: Please, please, for the love of Bayes, discuss your post in the new discussion area or on the Open Thread before you post it for the first time!

Nobody will plagiarize you, you'll get some good feedback and a better shot at getting promoted once you do post, and you won't risk karma at 10x that way.

Thank you for the suggestion. I have been working on the post off and on for over a month, since before the discussion area existed. So I had sort of decided on plunging into the top level post after re-writing and re-writing the sucker. I do not remember how I decided I did not want to discuss it ahead of time in the Open Thread. I like the post. I think it will be well received. It may not be. I am often surprised at how stuff I post on the internet is received; as often better than I was expecting as worse than I was expecting. It is often difficult to judge ahead of time how people are going to respond to what we offer. Still, it seems kind of mealy-mouthed or pussy-footing to not write it up as best as I can and just post it.

I understand how you feel about this, but I think most of the veterans would think more of a person who showed them respect by getting feedback before putting their first post on the main page. Certainly I would.

...I wonder if it would be possible to implement the following feature: the first post for a new account automatically goes to the Discussion page for a few days before it posts to the main site. If that were a known feature, would you be bothered by it?

Are first posts generally low quality? My first post was +50, and in my memory I recall most getting something like +10. Then again I do mostly remember just good posts.

My first post was embarrassingly bad. It was downvoted when new, and remains in the low single digits.

Should I resist the temptation to check it out?

I didn't delete it or anything. You can read it if you want. It's a thing I wrote and I tend to leave those where they lie, lest I go too far in the other direction and only have a four-day window of stuff I still like online at any given time.

Scanning back through the recent posts, I saw 4 or 5 first posts at 0 or lower in the last 2 months, out of about 20 first posts. (I have the feeling that there were a few more whose authors deleted them in the vain hopes of getting karma restored, but I could be wrong.)

Obviously, people who have been lurking for a while and hanging out with Less Wrongers in real life have a pretty good expected karma on their first post.

Obviously, people who have been lurking for a while and hanging out with Less Wrongers in real life have a pretty good expected karma on their first post.

And I would expect that people who are already veteran commenters when they post for the first time probably have even better prospects.

I think this comment highlights the distinction between popular and good.

High-ranked posts are popular; good may or may not have anything to do with it.

Personally I find all this kowtowing to the old guard a bit distasteful. One of my favorite virtues of academia is the double-blind submission process. Perhaps there are similar approaches that could be taken here.

It won't be until the 15th of October, so I may change my mind and follow your suggestion; and I do appreciate you taking the time to give me this recommendation. That is just how I am leaning now, and why.

I have upvoted this as a good analysis of propaganda techniques (no offence intended). What you write seems generally true; however, I am a little bit afraid that learning too much about communication skills can actually hurt a discussion, since the content gets diluted in signalling noise. It isn't a good thing when a criticism induces offended feelings in most of the criticised; on the other hand, it isn't a good thing either when the main message is nearly lost among disclaimers, or when it's never said because it's too hard to avoid all possible mistakes in formulation, context and precision. To strive for perfection is a noble thing to do, but, as Churchill said, it can paralyse you.

I agree it's possible to err too far in the direction of focusing on putting up a good appearance. It's quite tricky to strike the right balance between directness and tact. The conversations that I had with LW posters after making my string of postings on SIAI convinced me that I had placed too little emphasis on tact in the context under discussion. But certainly there are situations where taking a direct approach is warranted.

The main message that I hope people take away from my top level posting is that it's important to provide context to conversation partners who have a worldview different from one's own. Often such context is of independent interest, so the opportunity cost of providing background and context is low.

Would you provide an updated probability estimate for the statement "Eliezer's attempt to unilaterally build a Friendly AI that will go FOOM in collaboration with a group of a dozen or fewer people will succeed"? You've seen a lot of new evidence since you made the last estimate. (Original estimate, new wording from the last paragraph of #6)

In line with my remarks under "Mistake #6," I plan on gradually developing the background behind my thinking in a sequence of postings. This will give me the chance to provide others with appropriate context and refine my internal model according to the feedback that I get so that I can arrive at a more informed view.

Returning to an earlier comment that I've made, I think that there's an issue of human inability to assign numerical probabilities which is especially pronounced when one is talking about small and unstable probabilities. So I'm not sure how valuable it would be for me to attempt to give a numerical estimate. But I'll think about answering your question after making some more postings.

I feel like you still haven't understood the main criticism of your posts. You have acknowledged every mistake except for having an incorrect conclusion. All the thousands of words you've written avoid confronting the main point, which is whether people should donate to SIAI. To answer this, we need four numbers:

  1. The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed and of human extinction.
  2. The utilities of friendly AI and of human extinction.
  3. The utility of the marginal next-best use of money.

We don't need exact numbers, but we emphatically do need orders of magnitude. If you get the order of magnitude of any one of 1-3 wrong, then your conclusion is shot. The problem is, estimating orders of magnitude is a hard skill; it can be learned, but it is not widespread. And if you don't have that skill, you cannot reason correctly about the topic.
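To make the structure of this comparison concrete, here is a simplified sketch with purely hypothetical placeholder numbers; nothing below is anyone's actual estimate, the point is only how the order-of-magnitude inputs combine and how fragile the conclusion is to any one of them:

```python
# Simplified sketch of the comparison described above. Every number is a
# hypothetical placeholder; shift any input by a couple of orders of
# magnitude and the conclusion can flip.

delta_p_good_outcome_per_dollar = 1e-12  # marginal change in P(friendly AI) per dollar (hypothetical)
utility_good_outcome = 1e15              # utility of the good outcome, arbitrary units (hypothetical)
utility_extinction = 0.0                 # utility of human extinction, taken as the baseline

ev_donated_dollar = delta_p_good_outcome_per_dollar * (utility_good_outcome - utility_extinction)
ev_next_best_use = 1e2                   # utility of the marginal next-best use of the dollar (hypothetical)

print("donate" if ev_donated_dollar > ev_next_best_use else "prefer the next-best use")
```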

So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.

The tone of the last paragraph seems uncalled for. I doubt that a unitary "order of magnitude estimation skill" is the key variable here. To put a predictive spin on this, I doubt that you'd find a very strong correlation between results in a Fermi calculation contest and estimates of the above probabilities among elite hard-science PhD students.

When someone says they're rethinking an estimate and don't want to give a number right now, I think that's respectable in the same way as someone who's considering a problem and refuses to propose solutions too soon. There's an anchoring effect that kicks in when you put down a number.

From my private communications with multifolaterose, I believe he's acting in good faith by refusing to assign a number, for essentially that reason.

The link to Human inability to assign numerical probabilities, and the distance into the future to which he deferred the request, gave me the impression that it was a matter of not wanting to assign a number at all, not merely deferring it until later. Thank you for pointing out the more charitable interpretation; you seem to have some evidence that I don't.

Orthonormal correctly understands where I'm coming from. I feel that I have very poor information on the matter at hand and want to collect a lot more information before evaluating the cost-effectiveness of donating to SIAI relative to other charities. I fully appreciate your point that in the end it's necessary to make quantitative comparisons and plan on doing so after learning more.

I'll also say that I agree with rwallace's comment that rather than giving an estimate of the probability at hand, it's both easier and sufficient to give an estimate of

The relative magnitudes of the marginal effects of spending a dollar on X vs Y.

Thanks for articulating my thinking so accurately and concisely.

All the thousands of words you've written avoid confronting the main point, which is whether people should donate to SIAI.

I agree that my most recent post does not address the question of whether people should donate to SIAI.

So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.

There are many ways in which I could respond here, but I'm not sure how to respond because I'm not sure what your intent is. Is your goal to learn more from me, to teach me something new, to discredit me in the eyes of others, or something else?

Actually, my goal was to get you to give some numbers with which to test whether you've really updated in response to criticism, or are just signalling that you have. I had threshold values in mind and associated interpretations. Unfortunately, it doesn't seem to have worked (I put you on the defensive instead), so the test is inconclusive.

Setting aside for the moment the other questions surrounding this topic, and addressing just your main point in this comment:

The fact of the matter is that we do not have the data to meaningfully estimate numbers like this, not even to an order of magnitude, not even to ten orders of magnitude, and it is best to admit this.

Fortunately, we don't need an order of magnitude to make meaningful decisions. What we really need to know, or at least try to guess with better than random accuracy, is:

  1. The sign (as opposed to magnitude) of the marginal effect of spending a dollar on X.

  2. The relative magnitudes of the marginal effects of spending a dollar on X vs Y.

Both of these are easier to at least coherently reason about than absolute magnitudes.
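As a sketch of why this is enough (placeholder values again, not anyone's actual estimates): the decision about where the marginal dollar goes needs only the guessed signs and the guessed ratio of the two marginal effects, with no absolute calibration anywhere:

```python
# Sketch: choosing where the marginal dollar goes using only signs and a ratio.
# All values are hypothetical placeholders.

sign_x, sign_y = +1, +1       # guessed signs of the marginal effects of spending on X and on Y
effect_ratio_x_over_y = 3.0   # guessed ratio |marginal effect of X| / |marginal effect of Y|

if sign_x <= 0 and sign_y <= 0:
    choice = "spend on neither"
elif sign_x > 0 and (sign_y <= 0 or effect_ratio_x_over_y >= 1.0):
    choice = "spend on X"
else:
    choice = "spend on Y"

print(choice)  # -> "spend on X" under these placeholder guesses
```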

The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed and of human extinction.

P(eventual human extinction) looks enormous - since the future will be engineered. It depends on exactly what you mean, though. For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Also, what is a "friendly AI"? Say a future machine intelligence looks back on history - and tries to decide whether what happened was "friendly". Is there some decision process they could use to determine this? If so, what is it?

At any rate, the whole analysis here seems misconceived. The "extinction of all humans" could be awful - or wonderful - depending on the circumstances and on the perspective of the observer. Values are not really objective facts that can be estimated and agreed upon.

For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Or if all humans have voluntarily [1] changed into things we can't imagine?

[1] I sloppily assume that choice hasn't changed too much.

I am coming to this late as I'm just now catching up on recent Less Wrong posts. I am really surprised and happy to read this post, because it is unusual and pleasant for me to upwardly revise my opinion of someone so markedly. I'd seen your previous writing on LessWrong and decided, in a blunt emotivist summary: boo multifoliaterose. This post has reversed that opinion.

I have a question somewhat similar to something jimrandomh brought up - there was a passage in this post that indicated that you might have significantly changed your opinion of Yudkowsky and of SIAI overall, though you didn't really explore it in detail:

I had been unaware of some of the activities which SIAI staff have been engaging in; activities which I judge to have significant positive expected value. I had also misinterpreted some of SIAI's policies in ways that made them look worse than they now appear to me to be.

I'd like to read more about this, if your opinion has changed significantly/interestingly.

Thanks. Replied by email.

Note that I wasn't aware of the concurrence of the Singularity Summit and my post at that time either. It all happened very quickly after some comments regarding the Roko incident. My post was actually a copy and paste job from previous comments I hadn't thought through much. I just thought at that time that the comments would be more fruitful as a top-level post, i.e. get more answers. Until then I had always refrained from posting anything on LW because I knew I lacked the smarts and patience to think things through. It was all because of the Roko incident really, and never meant to be a serious attack on LW or the SIAI. After all I still think this is the most awesome place on the web :-)

^Just some annotation because I've been mentioned in the OP. Again, sorry for any inconvenience; I'm just not thinking enough before talking, that is, I'm not taking seriously things that should be taken seriously. As I said before, I'm a donor and not a foe. Phew ;-)

At least initially, presenting the factors that lead one to one's conclusion before presenting one's conclusion projects a lower level of confidence in one's conclusion than presenting one's conclusions before presenting the factors that lead one to these conclusions. Altering one's order of presentation in this fashion is not equivalent to lying and moreover is actually conducive to rational discourse.

Stating one's absurd beliefs is not the problem. The problem is expecting those statements to have arguing power (and you don't appear to appreciate this principle). Arguments should always be statements with which people having the discussion already agree, so that persuasion occurs by building a proof of the target belief in the opponent's mind from the beliefs already present. Not every statement one believes in is a valid argument, not even close. I'd even go as far as to claim that what you believe doesn't matter at all, only what the opponent believes does. Some of the opponent's beliefs might be about your beliefs, but that's a second-order effect of no fundamental importance.

At each point there are bound to be statements which are evaluated differently by different people; it's a simple fact to accept. Irrationality is signaled by inability to follow a rational argument, not by the property of holding incorrect beliefs. Where one was expected to have been exposed to such arguments, and hasn't changed their mind, this is informative, but not before the first argument is presented.

Stating one's absurd beliefs is not the problem.

My experience communicating with you is that you've been very receptive to engaging with positions that you strongly disagree with, but my experience communicating with most people is that the mere act of stating a position which appears absurd to them lowers one's credibility in their eyes, for the reasons discussed under the heading "Mistake #6" above.

The problem is expecting those statements to have arguing power (and you don't appear to appreciate this principle).

With the Aumann agreement theorem in mind, I think that the mere statement of a belief can carry very slight argumentative power. But the qualifier "very slight" is key here, and I basically agree with you.

At each point there are bound to be statements which are evaluated differently by different people; it's a simple fact to accept. Irrationality is signaled by inability to follow a rational argument, not by the property of holding incorrect beliefs.

I completely agree.


Right, so the reason I discounted the two articles in that series that I read was that you didn't cite any evidence. This article doesn't address that, but it does itself use lots of evidence, which I appreciate.