The Singularity Institute's Arrogance Problem

I intended Leveling Up in Rationality to communicate this:

Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...

But some people seem to have read it and heard this instead:

I'm super-awesome. Don't you wish you were more like me? Yay rationality!

This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:

At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think that because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Reading Eliezer's arrogance as evidence of technical cluelessness was one of the reasons I didn't donate until I met [someone at SI in person]. So, for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.

So, I have a few questions:

  1. What are the most egregious examples of SI's arrogance?
  2. On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
  3. What should SI do about this?

307 comments

(I hope this doesn't come across as overly critical because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that doesn't come across as an attack, but something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and weak results in the New Age circles I hung out in years ago. Your beliefs are much saner, but as long as you can't be more effective than them, I'll always have a problem taking you seriously.

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

(This has little to do with the soundness of SIAI's claims about Intelligence Explosion etc., though, but it does decrease my confidence that conclusions reached through your epistemic rationality are to be trusted if the present results seem so lacking.)

Thought experiment

If SIAI were a group of self-interested or self-deceiving individuals, similar to New Age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?

I would expect them to:

  1. Never hire anybody, or hire only very rarely
  2. Not release information about their finances
  3. Avoid high-profile individuals or events
  4. Laud their accomplishments a lot without producing concrete results
  5. Charge large amounts of money for classes/training
  6. Censor dissent in official venues, refuse to even consider the possibility of being a cult, etc.
  7. Not produce useful results

SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.

That goes for self-interested individuals with high rationality, purely material goals, and very low self-deception. The self-deceived case, on the other hand, covers people whose self-interest includes 'feeling important', 'believing oneself to be awesome', and perhaps even 'taking a shot at becoming the saviour of mankind'. In that case you should expect them to see awesomeness in anything that might possibly be awesome (various philosophy, various confused texts that might be becoming mainstream for all we know, you get the idea), combined with an absence of anything that is definitely awesome and can't be trivial (a new algorithmic solution to a long-standing, well-known problem that others have worked on, of sufficient practical importance, etc.).

I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything had been planned to make them look more convincing, that would imply a level of competence at such things that should have produced better execution all round (which would have helped more than the slightly improved believability gained by faking a lower level of PR competence).

I would not expect their brand of rationality to work in my own life. Which it does.

What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".

For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.

Even if certain rationality techniques are effective, that's separate from the claims about the rest of the organisation. Similar to the early-level Scientology classes being useful social hacks while the overall structure is less so.

the early-level Scientology classes being useful social hacks

They are? Do you have a reference? I thought they were weird nonsense about pointing to things and repeating pairs of words and starting at corners of rooms and so on.

Markedly increased general satisfaction in life, better success at relationships, both intimate and otherwise, noticing systematic errors in thinking, etc.

I haven't bothered to collect actual data (which wouldn't do much good since I don't have pre-LW data anyway) but I am at least twice as happy with my life as I have been in previous years.

I haven't bothered to collect actual data

This is the core issue with rationality at present. Until and unless some intrepid self-data-collectors track their personal lives post-Sequences, we have only a collection of smart people who post nice anecdotes. I admit that, like you, I didn't have the presence of mind to start collecting data, as I can't keep a diary current. But without real data we will have continued trouble convincing people that this works.

I was thinking the other day that I desperately wished I had written down my cached thoughts (and more importantly, cached feelings) about things like cryonics (in particular), politics, or [insert LW topic of choice here] before reading LW so that I could compare them now. I don't think I had ever really thought about cryonics, or if I had, I had a node linking it to crazy people.

Actually, now that I think about it, that's not true. I remember thinking about it once when I first started in research, while we were unfreezing lab samples, and considering whether or not cryonicists have a point. I don't remember what I felt about it, though.

One of the useful things about the internet is its record-keeping ability, combined with humans' natural tendency to comment on things they know nothing about. Are you aware of being on record, on a forum or social media site pre-LW, on issues that LW has dealt with?

It's probably a bit late for many attitudes of mine, but I have made a stab at this by keeping copies of all my YourMorals.org answers and listing other psychometric data at http://www.gwern.net/Links#profile

(And I've retrospectively listed in an essay the big shifts that I can remember; hopefully I can keep it up to date and obtain a fairly complete list over my life.)

Useful and harmful. ;-)

Yes, to an extent. I've had Facebook for about six years (I found HPMOR about 8 months ago, and LW about 7?), but I deleted the majority of easily accessible content and do not post anything particularly introspective on there. I know, generally, how I felt about the more culturally popular memes; what I really wish I remembered, though, are my views on things like cryonics or the singularity, to which I never gave serious consideration before LW.

Edit: At one point, I wrote a program to click the "Older posts" button on Facebook so I could go back and read all of my old posts, but it's been made largely obsolete by the timeline feature.

Until and unless some intrepid self-data-collectors track their personal lives post-Sequences, we have only a collection of smart people who post nice anecdotes

IIRC, wasn't a bunch of data-collection done for the Bootcamp attendees, which was aimed at resolving precisely that issue?

I appreciate the tone and content of your comment. Responding to a few specific points...

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!

There are many things we aren't (yet) good at. There are too many things about which to check the science, run tests, and update. That said, our ability to collaborate successfully with volunteers has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.

the only publicly visible person who strikes me as having some awesome powers is you

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

Aaaaaaaaaand now Carl will slap me for setting expectations too high. But I don't think I'm exaggerating that much. Maybe I'll get by with another winky-face.

;)

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct, and which still is not a mainstream view in any field.

The Sequences are simply awesome.

You wrote a series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point I'm trying to make is, muflax's diagnosis of "lame" isn't far off the mark. There's nothing here with the ability to wow someone who hasn't heard of SIAI before, or to encourage people to not be put off by arguments like the one Eliezer makes in the Q&A.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

It's actually been incredibly useful for establishing the credibility of every x-risk argument that I've had with people my age.

"Have you read Harry Potter and the Methods of Rationality?"

"YES!"

"Ah, awesome!"

merriment ensues

topic changes to something about things that people are doing

"So anyway the guy who wrote that also does...."

Again, take the outside view. The kind of conversation you described only happens with people who have already read HPMoR; just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences: they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.

I don't think that that kind of status converts nearly as well as establishing a niche of people who start adopting your values, and then talking to them.

Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.

for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction

The majority of "regular" people don't know what journals are; apart from the Wall Street Journal and the New England Journal of Medicine, they mostly haven't heard of any. If asked about journal articles, many would say, "you mean like a blog?" (if younger) or think you were talking about a diary or a newspaper (if older).

They have, however, heard of Harry Potter. ;-)

You know what would be awesome? If Eliezer had written the original Harry Potter to obtain funding for SI.

Seriously, there are plenty of people whom I would not pay to work on AI who have accomplished far more than anyone at SI, in more relevant fields.

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

I wasn't aware of Google's AGI team accepting CFAI. Is there a link of organizations that consider the Friendly AI issue important?

Building off this and my previous comment, I think that more, and more visible, rationality verification could help. First, opening your ideas up to tests generally reduces perceptions of arrogance. Second, successful results would have effects similar to those of the technical accomplishments I mentioned above. (Note that I expect wide-scale rationality verification to increase the amount of pro-LW evidence that can easily be presented to outsiders, not to increase my own confidence; thus this isn't in conflict with conservation of evidence.)

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

Eliezer is pretty amazing. He's written some brilliant fiction, and some amazing stuff in the Sequences, plus CFAI, CEV, and TDT.

My #1 suggestion, by a big margin, is to generate more new formal math results.

My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.

SI comes across as arrogant because it claims to be even better than science while failing to publish in significant scientific journals. If that does not look like pseudoscience or a cult, I don't know what does.

So please either stop pretending to be so great, or prove it! For starters, it is not necessary that the paper be about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?
