I intended Leveling Up in Rationality to communicate this:

Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...

But some people seem to have read it and heard this instead:

I'm super-awesome. Don't you wish you were more like me? Yay rationality!

This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:

At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Eliezer's arrogance, read as evidence of technical cluelessness, was one of the reasons I didn't donate until I met [someone at SI in person]. So, for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.

So, I have a few questions:

 

  1. What are the most egregious examples of SI's arrogance?
  2. On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
  3. What should SI do about this?

 

The Singularity Institute's Arrogance Problem
[-][anonymous]790

(I hope this doesn't come across as overly critical because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that doesn't come across as an attack, but something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and wea... (read more)

Thought experiment

If the SIAI were a group of self-interested/self-deceiving individuals, similar to New Age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?

I would expect them to:

  1. Never hire anybody, or hire only very rarely
  2. Not release information about their finances
  3. Avoid high-profile individuals or events
  4. Laud their accomplishments a lot without producing concrete results
  5. Charge large amounts of money for classes/training
  6. Censor dissent on official areas, refuse to even think about the possibility of being a cult, etc.
  7. Not produce useful results

SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.

6private_messaging
That goes for self-interested individuals with high rationality, purely material goals, and very low self-deception. The self-deceived case, on the other hand, is people whose self-interest includes 'feeling important' and 'believing oneself to be awesome' and perhaps even 'taking a shot at becoming the saviour of mankind'. In that case you should expect them to see awesomeness in anything that might possibly be awesome (various philosophy, various confused texts that might be becoming mainstream for all we know, you get the idea), combined with an absence of anything that is definitely awesome and can't be trivial (a new algorithmic solution to a long-standing, well-known problem that others have worked on, something practically important enough, etc.).
[-]FAWS140

I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything had just been planned to make them look more convincing, that would imply a level of competence at such things that I'd expect all-round better execution (which would have helped more than the slightly improved believability gained by faking a lower level of PR and other competence).

3RobertLumley
I would not expect their brand of rationality to work in my own life. Which it does.

What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".

For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.

8FiftyTwo
Even if certain rationality techniques are effective, that's separate from the claims about the rest of the organisation. It's similar to the early-level Scientology classes being useful social hacks while the overall structure is less so.
0Blueberry
They are? Do you have a reference? I thought they were weird nonsense about pointing to things and repeating pairs of words and starting at corners of rooms and so on.
2RobertLumley
Markedly increased general satisfaction in life, better success at relationships, both intimate and otherwise, noticing systematic errors in thinking, etc. I haven't bothered to collect actual data (which wouldn't do much good since I don't have pre-LW data anyway) but I am at least twice as happy with my life as I have been in previous years.
9Karmakaiser
This is the core issue with rationality at present. Until and unless some intrepid self-data-collectors track their personal lives post-Sequences, we have only a collection of smart people who post nice anecdotes. I admit that, like you, I didn't have the presence of mind to start collecting data, as I can't keep a diary current. But without real data we will have continued trouble convincing people that this works.
3RobertLumley
I was thinking the other day that I desperately wished I had written down my cached thoughts (and more importantly, cached feelings) about things like cryonics (in particular), politics, or [insert LW topic of choice here] before reading LW so that I could compare them now. I don't think I had ever really thought about cryonics, or if I had, I had a node linking it to crazy people. Actually, now that I think about it, that's not true. I remember thinking about it once when I first started in research, when we were unfreezing lab samples, and considering whether or not cryonicists have a point. I don't remember what I felt about it, though.
4Karmakaiser
One of the useful things about the internet is its record-keeping abilities and humans' natural ability to comment on things they know nothing about. Are you aware of being on record on a forum or social media site pre-LW on issues that LW has dealt with?
2RobertLumley
Useful and harmful. ;-) Yes, to an extent. I've had Facebook for about six years (I found HPMOR about 8 months ago, and LW about 7?), but I deleted the majority of easily accessible content and do not post anything particularly introspective on there. I know, generally, how I felt about more culturally popular memes; what I really wish I remembered, though, are things like cryonics or the singularity, to which I never gave serious consideration before LW. Edit: At one point, I wrote a program to click the "Older posts" button on Facebook so I could go back and read all of my old posts, but it's been made largely obsolete by the timeline feature.
1gwern
It's probably a bit late for many attitudes of mine, but I have made a stab at this by keeping copies of all my YourMorals.org answers and listing other psychometric data at http://www.gwern.net/Links#profile (And I've retrospectively listed in an essay the big shifts that I can remember; hopefully I can keep it up to date and obtain a fairly complete list over my life.)
0gwern
IIRC, wasn't a bunch of data-collection done for the Bootcamp attendees, which was aimed at resolving precisely that issue?

I appreciate the tone and content of your comment. Responding to a few specific points...

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!

There are many things we aren't (yet) good at. There are too many things for us to check the science on, test, and update about. In fact, our ability to collaborate successfully with volunteers has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.

the only publicly visible person who strikes me as having some awesome powers is you

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them... (read more)

[-][anonymous]540

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct and which is still not a mainstream view in any field.

The Sequences are simply awesome.

You wrote a series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point ... (read more)

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

It's actually been incredibly useful in establishing credibility for every x-risk argument that I've had with people my age.

"Have you read Harry Potter and the Methods of Rationality?"

"YES!"

"Ah, awesome!"

merriment ensues

topic changes to something about things that people are doing

"So anyway the guy who wrote that also does...."

[-][anonymous]210

Again, take the outside outside view. The kind of conversation you described only happens with people who have read HPMoR--just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences, they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.

3atucker
I don't think that that kind of status converts nearly as well as establishing a niche of people who start adopting your values, and then talking to them.
[-][anonymous]170

Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.

4pjeby
The majority of "regular" people don't know what journals are; apart from the Wall Street Journal and the New England Journal of Medicine, they mostly haven't heard of any. If asked about journal articles, many would say, "you mean like a blog?" (if younger) or think you were talking about a diary or a newspaper (if older). They have, however, heard of Harry Potter. ;-)
1private_messaging
You know what would be awesome: if Eliezer had written the original Harry Potter to obtain funding for SI. Seriously, there are plenty of people whom I would not pay to work on AI who have accomplished far more than anyone at SI, in more relevant fields.

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?

I wasn't even aware of "Google's AGI team"...

0lukeprog
Update: please see here.
1beoShaffer
Building off of this and my previous comment, I think that more and more visible rationality verification could help. First off, opening your ideas up to tests generally reduces perceptions of arrogance. Secondly, successful results would have similar effects to the technical accomplishments I mentioned above. (Note I expect wide scale rationality verification to increase the amount of pro-LW evidence that can be easily presented to outsiders, not for it to increase my own confidence. Thus this isn't in conflict with the conservation of evidence.)
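For reference, the conservation-of-expected-evidence constraint invoked in that parenthetical says that the prior must equal the probability-weighted average of the posteriors one expects:

```latex
% Conservation of expected evidence: you cannot expect observation to raise
% your probability on net, because the prior is already the expected posterior.
P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)
```

Expecting verification to yield more evidence that is easy to show outsiders is thus compatible with not expecting it to move one's own estimate.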
-2Solvent
Eliezer is pretty amazing. He's written some brilliant fiction, and some amazing stuff in the Sequences, plus CFAI, CEV, and TDT.

My #1 suggestion, by a big margin, is to generate more new formal math results.

My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.

SI is arrogant because it pretends to be even better than science, while failing to publish in significant scientific journals. If this does not seem like pseudoscience or a cult, I don't know what does.

So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?

Yeah. The best way to dispel the aura of arrogance is to actually accomplish something amazing. So, SIAI should publish some awesome papers, or create a powerful (1) AI capable of some impressive task like playing Go (2), or end poverty in Haiti (3), or something. Until they do, and as long as they're claiming to be super-awesome despite the lack of any non-meta achievements, they'll be perceived as arrogant.

(1) But not too powerful, I suppose.
(2) Seeing as Jeopardy is taken.
(3) In a non-destructive way.

0Regex
2016 update: Go is now also taken. The number of impressive tasks remaining approaches zero as t -> inf! If not to AI or heat death, we're doomed to having already done everything amazing.
2DuncanS
There are indeed times you can get the right answer in five minutes (no, seconds), but it still takes the same length of time as for everyone else to write the thing up into a paper.

How long is that "same length of time"? Hours? Days? If 5 days of work could make LW acceptable in scientific circles, is it not worth doing? Or is it better to complain about why, oh why, more people don't take SI seriously?

Can some part of that work be outsourced? Just write the outline of the answer, then find some smart guy in India and pay him something like $100 to write it up? Or, if money is not enough for the people who could write the paper well, could you bribe someone by offering them co-authorship? Graduate students have to publish papers anyway, so if you give them a complete solution, they should be happy to cooperate.

Or set up a "scientific wiki" on SI site, where the smartest people will write the outlines of their articles, and the lesser brains can contribute by completing the texts.

These are my solutions, which seem rather obvious to me. It is not certain they would work, but I guess trying them is better than doing nothing. Could a group of x-rational gurus find seven more solutions in five minutes?

From the outside, this seems like: "Yeah, I totally could do it, but I will not. Now explain to me why people who can do it are perceived as more skilled than me." -- "Because they showed everyone they can do it, duh."

3Benya
Upvoted for clearly pointing out the tradeoff (yes publicly visible accomplishments that are easy to recognize as accomplishments may not be the most useful thing to work on, but not looking awesome is a price paid for that and needs to be taken into account in deciding what's useful). However, I want to point out that if I heard that an important paper was written by someone who was paid $100 and doesn't appear on the author list, my crackpot/fraud meter (as related to the people on the author list) would go ping-Ping-PING, whether that's fair or not. This makes me worry that there's still a real danger of SIAI sending the wrong signals to people in academia (for similar but different reasons than in the OP).

in combination with his lack of technical publications

I think it would help for EY to submit more of his technical work for public judgment. Clear proof of technical skill in a related domain makes claims less likely to come off as arrogant. For that matter it also makes people more willing to accept actions that they do perceive as arrogant.

The claim that donating to SIAI is the charity donation with the highest expected return* has always struck me as rather arrogant, though I can see the logic behind it.

The problem is firstly that it's an extremely self-serving statement (equivalent to "giving us money is the best thing you can ever possibly do"); even if true, its credibility is reduced by the claim coming from the same person who would benefit from it.

Secondly, it requires me to believe a number of claims, each of which individually carries a burden of proof, and which demand even more in conjunction (as the sketch below illustrates). These include: "Strong AI is possible," "friendly AI is possible," "the actions of SIAI will significantly affect the results of investigations into FAI," and "the money I donate will significantly improve the effectiveness of SIAI's research" (I expect the relationship between research effectiveness and funding isn't linear). All of which I only have your word for.

Thirdly, contrast this with other charities which are known to be very effective and can prove it, and whose results affect presently suffering people (e.g. the Against Malaria Foundation).
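Returning to the second point, here is a toy sketch of how the conjunction bites; the individual probabilities below are invented purely for illustration, assume independence, and are not anyone's actual estimates:

```python
# Even if each claim on its own is fairly plausible, the donation argument
# needs all of them at once.  Probabilities are made up for illustration,
# and multiplying them assumes (unrealistically) that they are independent.
claims = {
    "strong AI is possible": 0.8,
    "friendly AI is possible": 0.7,
    "SIAI significantly affects FAI outcomes": 0.5,
    "marginal donations significantly improve SIAI's research": 0.5,
}

p_all = 1.0
for claim, p in claims.items():
    p_all *= p

print(f"P(every claim holds) = {p_all:.2f}")  # 0.8 * 0.7 * 0.5 * 0.5 = 0.14
```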

Caveat, I'm not arguing any of the clai... (read more)

-1lukeprog
I feel like I've heard this claimed, too, but... where? I can't find it. Here is the latest fundraiser; which line were you thinking of? I don't see it.
[-][anonymous]170

I feel like I've heard this claimed, too, but... where? I can't find it.

Question #5.

7lukeprog
Yup, there it is! Thanks. Eliezer tends to be more forceful on this than I am, though. I seem to be less certain about how much x-risk reduction is purchased by donating to SI as opposed to donating to FHI or GWWC (because GWWC's members are significantly x-risk focused). But when this video was recorded, FHI wasn't working as much on AI risk (as it is now), and GWWC barely existed. I am happy to report that I'm more optimistic about the x-risk reduction purchased per dollar when donating to SI now than I was 6 months ago. Because of stuff like this. We're getting the org into better shape as quickly as possible.

because GWWC's members are significantly x-risk focused

Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.

2Thrasymachus
(Belated reply): I can only offer anecdotal data here, but as one of the members of GWWC, I can say many of the members are interested. Also, listening to the directors, most of them are also interested in x-risk issues. You are right that GWWC isn't a charity (although it is likely to turn into one), and their recommendations are non-x-risk. The rationale for recommending charities depends on reliable data, and x-risk is one of those things where a robust "here's how much more likely a happy singularity will be if you give to us" analysis looks very hard.
1Barry_Cotter
Neither can I, but IIRC Anna Salamon did an EU calculation which came up with eight lives saved per dollar donated, no doubt impressively caveated and with error bars aplenty.
7lukeprog
I think you're talking about this video. Without watching it again, I can't remember if Anna says that SI donation could buy something like eight lives per dollar, or whether donation to x-risk reduction in general could buy something like eight lives per dollar.
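For readers who have not seen this kind of estimate, the general shape of such a calculation looks like the sketch below; every number in it is hypothetical and chosen only to show the structure, not Anna Salamon's actual figures:

```python
# Toy expected-value calculation for an x-risk donation.  All inputs are
# assumptions picked to illustrate the form of the argument.
risk_reduction_per_million = 1e-4  # assumed drop in extinction probability per $1M
lives_at_stake = 1e10              # assumed number of (mostly future) lives affected
donation = 1e6                     # dollars

expected_lives_saved = risk_reduction_per_million * lives_at_stake
print(expected_lives_saved / donation)  # 1.0 expected life per dollar, given these inputs
```

The output is driven entirely by the assumed inputs, which is why any such estimate arrives "impressively caveated and with error bars aplenty."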
[-]Shmi410

Having been through physics grad school (albeit not of Caltech caliber), I can confirm that a lack of modesty (real or false) is a major red flag, and a tell-tale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submitting them to conference proceedings and/or peer-reviewed journals. Anything less would be less than rational.

[-]XiXiDu510

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...

Even Greg Egan managed to copublish papers on arxiv.org :-)

ETA

Here is what John Baez thinks about Greg Egan (science fiction author):

He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!

That's actually what academics should be saying about Eliezer Yudkowsky if it is true. How does an SF author manage to get such a reputation instead?

5gwern
That actually explains a lot for me - when I was reading The Clockwork Rocket, I kept thinking to myself, 'how the deuce could anyone without a physics degree follow the math/physics in this story?' Well, here's my answer - he's still up on his math, and now that I check, I see he has a BS in math too.
4arundelo
I thought this comment by Egan said something interesting about his approach to fiction: (I enjoyed Incandescence without taking notes. If, while I was reading it, I had been quizzed on the direction words, I would have done OK but not great.) Edit: The other end of the above link contains spoilers for Incandescence. To understand the portion I quoted, it suffices to know that some characters in the story have their own set of six direction words (instead of "up", "down", "north", "south", "east", and "west"). Edit 2: I have a bit of trouble keeping track of characters in novels. When I read on my iPhone, I highlight characters' names as they're introduced, so I can easily refresh my memory when I forget who someone is.
1gwern
Yes, he's pretty unapologetic about his elitism - if you aren't already able to follow his concepts or willing to do the work so you can, you are not his audience and he doesn't care about you. Which isn't a problem with Incandescence, whose directions sound perfectly comprehensible, but is much more of an issue with TCR, which builds up an entire alternate physics.
2Pablo
What's the source for that quote? A quick Google search failed to yield any relevant results.
2XiXiDu
Private conversation with John Baez (I asked him if I am allowed to quote him on it). You can ask him to verify it.
2mwengler
To be fair, Eliezer gets good press from Professor Robin Hanson. This is one of the main bulwarks of my opinion of Eliezer and SIAI. (Other bulwarks include having had the distinct pleasure of meeting lukeprog at a few meetups and meeting Anna at the first meetup I ever attended. Whatever else is going on at SIAI, there is a significant amount of firepower in the rooms.)
6ScottMessick
Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?
0Viliam_Bur
By publishing his results at a place where scientists publish.
3[anonymous]
I agree, wholeheartedly, of course -- except the last sentence. There's a not very good argument that the opportunity cost of EY learning LaTeX is greater than the opportunity cost of having others edit afterward. There's also a not very good argument that EY doesn't lose terribly much from his lack of academic signalling credentials. Together these combine to a weak argument that the current course is in line with what EY wants, or perhaps would want if he knew all the relevant details.
[-]Maelin400

For someone who knows how to program, learning LaTeX to a perfectly serviceable level should take at most one day's worth of effort, and most likely it would be spread diffusely throughout the using process, with maybe a couple of hours' dedicated introduction to begin with.

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper. It's very unlikely that it would take more than two.
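For a sense of how small that investment is, here is roughly all the boilerplate a bare-bones draft needs to compile (a minimal sketch; a real submission would use the target venue's document class and bibliography style):

```latex
\documentclass{article}
\usepackage{amsmath}

\title{A Minimal Example}
\author{A. N. Author}

\begin{document}
\maketitle

Body text is typed as-is; mathematics goes in math mode, for example
\begin{equation}
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.
\end{equation}

\end{document}
```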

8dbaupp
And one gets all the benefits of a text document while writing it (grep-able, version control, etc.). (It should be noted that if one is writing LaTeX, it is much easier with a LaTeX specific editor (or one with an advanced LaTeX mode))
5lukeprog
I'm not at all confident that writing (or collaborating on) academic papers is the most x-risk-reducing way for Eliezer to spend his time.
8Bugmaster
Speaking of arrogance and communication skills: your comment sounds very similar to, "Since Eliezer is always right about everything, there's no need for him to waste time on seeking validation from the unwashed academic masses, who likely won't comprehend his profound ideas anyway". Yes, I am fully aware that this is not what you meant, but this is what it sounds like to me.
2lukeprog
Interesting. That is a long way from what I meant. I just meant that there are many, many ways to reduce x-risk, and it's not at all clear that writing papers is the optimal way to do so, and it's even less clear that having Eliezer write papers is so.
6Bugmaster
Yes, I understood what you meant; my comment was about style, not substance. Most people (myself included, to some non-trivial degree) view publication in academic journals as a very strong test of one's ideas. Once you publish your paper (or so the belief goes), the best scholars in the field will do their best to pick it apart, looking for weaknesses that you might have missed. Until that happens, you can't really be sure whether your ideas are correct. Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything". And by combining this statement with saying that Eliezer's time is very valuable because he's reducing x-risk, you appear to be saying that either the other academics don't care about x-risk (in which case they're clearly ignorant or stupid), or that they would be unable to recognize Eliezer's x-risk-reducing ideas as being correct. Hence, my comment above. Again, I am merely commenting on the appearance of your post, as it could be perceived by someone with an "outside view". I realize that you did not mean to imply these things.
3wedrifid
That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..." It may sometimes be worth optimising speech such that it is hard to even willfully misinterpret what you say (or interpret based on an already particularly high prior for 'statement will be arrogant') but this is a different consideration to trying not to (unintentionally) appear arrogant to a neutral audience.
5JoshuaZ
For what it is worth, I had an almost identical reaction when reading the statement.
0Bugmaster
Fair enough; it's quite possible that my interpretation was too aggressive.
0wedrifid
It's the right place for erring on the side of aggressive interpretation. We've been encouraged (and primed) to do so!
8mwengler
I think the evolution is towards a democratization of the academic process. One could say the cost of academia was so high in the middle ages that the smart move was filtering the heck out of participants to at least have a chance of maximizing the utility of those scarce resources. And now those costs have been driven to nearly zero, with the largest cost being the signal-to-noise problem: how does a smart person choose what to look at? I think putting your signal into locations where the type of person you would like to attract gathers is the best bet. Web publication of papers is one. Scientific meetings are another. I don't think you can find an existing institution more chock full of people you would like to be involved with than the math-science-engineering academic institutions. Market in them. If there is no one who can write an academic math paper that is interested enough in EY's work to translate it into something somewhat recognizable as valuable by his peers, then the emperor is wearing no clothes. As a Caltech applied-physics PhD who has worked with optical interferometers both in real life and in QM calculations (published in journals), I find EY's stuff on interferometers incomprehensible. I would venture to say "wrong" but I wouldn't go that far without discussing it in person with someone. Robin Hanson's endorsement of EY is the best credential he has for me. I am a Caltech grad and I love Hanson's "freakonomics of the future" approach, but his success at being associated with great institutions is not a trivial factor in my thinking I am right to respect him. Get EY or lukeprog or Anna or someone else from SIAI on Russ Roberts' podcast. Robin has done it. Overall, SIAI serves my purposes pretty well as is. But I tend to view SIAI as pushing a radical position about some sort of existential risk and beliefs about AI, where the real value is probably not quite as radical as what they push. An example from history would be BF Skinner and behaviori…
4Adele_L
Similarly, the fact that Scott Aaronson and John Baez seem to take him seriously is a significant credential for me.
8[anonymous]
I thought we were talking about the view from outside the SIAI?
8lukeprog
Clearly, Eliezer publishing technical papers would improve SI's credibility. I'm just pointing out that this doesn't mean that publishing papers is the best use of Eliezer's time. I wasn't disagreeing with you; just making a different point.
[-]Shmi180

Publishing technical papers would be one of the better uses of his time, editing and formatting them probably is not. If you have no volunteers, you can easily find a starving grad student who would do it for peanuts.

3[anonymous]
Well, they've got me for free.
0Shmi
You must be allergic to peanuts.
0[anonymous]
Not allergic, per se. But I doubt they would willingly throw peanuts at me, unless perhaps I did a trick with an elephant.
0[anonymous]
I'm not disagreeing with you either.
2Shmi
I would see what the formatting standards are in the relevant journals and find a matching document class or a LyX template. Someone other than Eliezer can certainly do that.
[-][anonymous]360

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

If this is the case, it sounds like EY has a Chuck Norris problem, i.e., his mythos has spread beyond its reality.

Yes. At various times we've considered hiring EY an advanced math tutor to take him to the next level more quickly. He's pretty damn good at math but he's not Terence Tao.

5[anonymous]
So did you ask your friend where this notion of theirs came from?
0Kaj_Sotala
I have a memory of EY boasting about how he learned to solve high school/college level math before the age of ten, but I couldn't track down where I read that.
5Kaj_Sotala
Ah, here is the bit I was thinking about:
2Desrtopa
I don't remember the post, but I'm pretty sure I remember that Eliezer described himself as a coddled math prodigy, not having been made to train seriously and compete, and so he lags behind math prodigies who were made to hone their skills that way, like Marcello.
1mwengler
It's in the Wayback Machine link in the post you are commenting on!
0Kaj_Sotala
I hadn't read that link before, so it was somewhere else, too.

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

I too don't remember that he ever claimed to have remarkable math ability. He's said that he was a "spoiled math prodigy" (or something like that), meaning that he showed precocious math ability while young but wasn't really challenged to develop it. Right now, his knowledge seems to be around the level of a third- or fourth-year math major, and he's never claimed otherwise. He surely has the capacity to go much further (as many people who reach that level do), but he hasn't even claimed that much, has he?

7private_messaging
This leaves one wondering how the hell one could be this concerned about AI risk but not study math properly. How the hell can one go on about Bayesian this and Bayesian that but not study? How can one trust one's intuitions about how much computational power is needed for AGI, and not want to improve those intuitions? I've speculated elsewhere that he would likely be unable to implement a general Bayesian belief propagation graph, or even know what is involved (it's an NP-complete problem in general, and the accuracy of the solution is up to heuristics. Yes, heuristics. Biased ones, too). That's very bad when it comes to understanding rationality, as you will start going on with maxims like "update all your beliefs" etc., which look outright stupid to, e.g., me (I assure you I can implement a Bayesian belief propagation graph), and it triggers my 'it's another annoying person who talks about things he has no clue about' reflex. Talking about Bayesian this and Bayesian that, one had better know the mathematics very well, because in practice all those equations get awfully hairy on things like general graphs (not just trees). If you don't know the relevant math very well and you call yourself a Bayesian, you are professing a belief in belief. If you do not make a claim of extreme mathematical skills and knowledge, and you go on about Bayesian this and that, other people will have to assume extreme mathematical skills and knowledge out of politeness.
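For concreteness, here is a minimal sketch of what exact Bayesian inference on even a tiny graph involves, done by brute-force enumeration with made-up numbers; the sum grows as 2^n in the number of variables, which is why general (loopy) graphs push you onto the heuristic, approximate message-passing schemes the comment alludes to:

```python
from itertools import product

# Tiny Bayesian network A -> B -> C over binary variables.
# All conditional-probability entries are invented for illustration.
P_A = {0: 0.7, 1: 0.3}
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # P(B=b | A=a)
P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # P(C=c | B=b)

def joint(a, b, c):
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# Exact posterior P(A=1 | C=1) by summing over every assignment.
# Fine for three variables; the number of terms doubles with each one added.
numerator = sum(joint(1, b, 1) for b in (0, 1))
evidence = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))
print(f"P(A=1 | C=1) = {numerator / evidence:.3f}")  # ~0.507 with these numbers
```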
5David_Gerard
Yes.

There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?

The sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?

*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.

To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:

  • Many of these bullet points are about work in progress and (paywalled?) journal articles. If I can't link it to my friends and say, "Check out this cool thing," I don't care. Tell me what you've finished that I can share with people who might be interested.
  • Lots on transparency and progress reporting. In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely. These people are loud, but they're a small minority of your potential donors.
5lukeprog
Of course, things we finished before December 2011 aren't in the progress report. E.g. The Singularity and Machine Ethics. Not really. We're also working on many things accessible to a wider crowd, like Facing the Singularity and the new website. Once the new website is up we plan to write some articles for mainstream magazines and so on.
6Paul Crowley
"smart and gets things done" I think originates with Joel Spolsky: http://www.joelonsoftware.com/articles/fog0000000073.html

I agree with what has been said about the modesty norm of academia; I speculate that it arises because if you can avoid washing out of the first-year math courses, you're already one or two standard deviations above average, and thus you are in a population in which achievements that stood out in a high school (even a good one) are just not that special. Bragging about your SAT scores, or even your grades, begins to feel a bit like bragging about your "Participant" ribbon from sports day. There's also the point that the IQ distribution in a good physics department is not Gaussian; it is the top end of a Gaussian, sliced off. In other words, there's a lower bound and an exponential frequency decay from there. Thus, most people in a physics department are on the lower end of their local peer group. I speculate that this discourages bragging because the mass of ordinary plus-two-SDs doesn't want to be reminded that they're not all that bright.
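A quick simulation illustrates the "sliced-off Gaussian" point; the mean-100/SD-15 convention is the usual IQ parameterization, and the +2 SD cutoff is an assumption for illustration:

```python
import random

# Sample a large population, keep only those above the cutoff, and check how
# many of the "admitted" fall within one SD of the cutoff itself.
random.seed(0)
population = (random.gauss(100, 15) for _ in range(1_000_000))
admitted = [iq for iq in population if iq >= 130]

near_cutoff = sum(iq < 145 for iq in admitted) / len(admitted)
print(f"fraction within 1 SD of the cutoff: {near_cutoff:.2f}")  # ~0.94
```

Most members of such a group therefore sit near the bottom of their local peer distribution, which is the mechanism being proposed for the modesty norm.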

However, all that aside: Are academics the target of this blog, or of lukeprog's posts? Propaganda, to be effective, should reach the masses, not the elite - although there's something to be said for "Get the elite and the masses ... (read more)

4Karmakaiser
So if I could restate the norms of academia vis-à-vis modesty: "Do the impossible. But don't forget to shut up as well." Is that a fair characterization?

Well, no, I don't think so. Most academics do not work on impossible problems, or think of this as a worthy goal. So it should be more like "Do cool stuff, but let it speak for itself".

Moderately related: I was just today in a meeting to discuss a presentation that an undergraduate student in our group will be giving to show her work to the larger collaboration. On her first page she had

Subject

Her name

Grad student helping her

Dr supervisor no 1

Dr supervisor no 2

And to start off our critique, supervisor 1 mentioned that, in the subculture of particle physics, it is not the custom to list titles, at least for internal presentations. (If you're talking to a general audience the rules change.) Everyone knows who you are and what you've done! Thus, he gave the specific example that, if you mention "Leon", everyone knows you speak of Leon Lederman, the Nobel-Prize winner. But as for "Dr Lederman", pff, what's a doctorate? Any idiot can be a doctor and many idiots (by physics standards, that is) are; if you're not a PhD it's at least assumed that you're a larval version of one. It's just not a very unusual accomplishment in these circles. To have your first ... (read more)

2asr
I have seen this elsewhere in the academy as well. At many elite universities, professors are never referred to as Dr-so-and-so. Everybody on the faculty has a doctorate. They are Professor-so-and-so. At some schools, I'm told they are referred to as Mr or Mrs-so-and-so. Similar effect: "we know who's cool and high-status and don't need to draw attention to it."
1jsteinhardt
Wow, I didn't even consciously recognize this convention, although I would definitely never, for instance, add titles to the author list of a paper. So I seem to have somehow picked it up without explicitly deciding to.

I've recommended this before, I think.

I think you should get Eliezer to say the accurate but arrogant-sounding things, because everyone already knows he's like that. You yourself, Luke, should be more careful about maintaining a humble stance.

If you need people to say arrogant things, make them ghost-write for Eliezer.

Personally, I think that a lot of Eliezer's arrogance is deserved. He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He's got a reputation as an arrogant eccentric genius anyway.

But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.

I think having people ghost-write for Eliezer is a deeply suboptimal solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.

Eliezer is a real person. He is not "Big Brother" or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or LessWrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.

5Solvent
Yeah, I kinda agree. I was slightly exaggerating my position for clarity. Maybe not full on ghost-writing. But occasionally, having someone around who can say what he wants without further offending anybody can be useful. Like, part of the reason the Sequences are awesome is that he personally claims that they are. Also, Eliezer says: So occasionally SingInst needs to say something that sounds arrogant. I just think that when possible, Eliezer should say those things.

He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems.

As a curiosity, what would the world look like if this were not the case? I mean, I'm not even sure what it means for such a sentence to be true or false.

Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It's just that, in professional philosophy, there does not seem to be a consensus on what a "problem of philosophy" is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most "problems" of philosophy are dismissed, rather than ever solved.

Here are examples of these philosophical solutions. I don't know which of these he solved personally, and which he simply summarized others' answer to:

  • What is free will? Ooops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

  • What is intelligence? The ability to optimize things.

  • What is knowledge? The ability to constrain your expectations.

  • What should I do with the Newcomb's Box problem? TDT answers this.

...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.

And so on. I know he didn't come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.

I've been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers' proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.

9J_Taylor
I admire the phrase "what an algorithm feels like from the inside". This is certainly one of Yudkowsky's better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel. Nonetheless, Yudkowsky is not the first compatibilist. One could define the term in such a way. I tend to take a instrumentalist view on intelligence. However, "the ability to optimize things" may well be a thing. You may as well call it intelligence, if you are so inclined. This, nonetheless, may not be a solution to the question "what is intelligence?". It seems as though most competent naturalists have moved passed the question. I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate? I have absolutely no knowledge of the history of Newcomb's problem. I apologize. Further apologies for the following terse statements: I don't think Fun theory is known by academia. Also, it looks like, at best, a contemporary version of eudaimonia. The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, "The good is the end of inquiry" would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement. TDT is awesome. Predicted by the superrationality of Hofstadter, but so what? I don't mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a "Two Dogmas of Empiricism" or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect. Of course, none of this really matters. He's not trying to be a good philosopher. He's trying to save the world.
3Solvent
Okay, the Gettier problem. I can explain the Gettier problem, but it's just my explanation, not Eliezer's. The Gettier problem is pointing out problems with the definition of knowledge as justified true belief. "Justified true belief" (JTB) is an attempt at defining knowledge. However, it falls into the classic problem with philosophy of using intuition wrong, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here. Also, it's only for irrational beings like humans that there is a distinction between "justified' and 'belief.' An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway. Incidentally, I just re-read this post, which says: So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.
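A minimal sketch of what "degrees of belief according to the strength of the justification, using Bayesian rules" cashes out to in practice; the numbers are purely illustrative:

```python
# A single Bayesian update: justification is not a separate yes/no property,
# just how strongly the evidence shifts the degree of belief.
prior = 0.30            # P(hypothesis) before seeing the evidence
p_e_given_h = 0.80      # P(evidence | hypothesis)
p_e_given_not_h = 0.10  # P(evidence | not hypothesis)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"degree of belief: {prior:.2f} -> {posterior:.2f}")  # 0.30 -> 0.77
```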
6asr
I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and no reliable way to distinguish the two.
7XiXiDu
Isn't this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question. Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited, but artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where indeed there are no faces, just like humans do but on very different occasions. Or take the answers of IBM Watson. Some were wrong, but in completely new ways. That's a real danger in my opinion.
4wedrifid
Honest answer: Yes. For example 1 utilon per paperclip.
3lessdazed
I appreciate the example. It will serve me well. Upvoted.
3J_Taylor
I am aware of the Gettier Problem. I just do not see the phrase "the ability to constrain one's expectations" as being a proper conceptual analysis of "knowledge." If it were a conceptual analysis of "knowledge", it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts which I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk-term "knowledge". Attempting to define "knowledge" is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology. I am not entirely familiar with Eliezer's history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but has surface resemblances to, Eliezer's espoused beliefs. Also, I have never heard of Hooke independently inventing calculus. It sounds interesting, however. Still, are you certain you are not thinking of Leibniz?
0Solvent
ooops, fixed. I'll respond to the rest of what you said later.
0MatthewBaker
To quickly sum up Newcomb's problem: it's a question of probability where choosing the more "rational" thing to do results in a great deal less currency under a traditional probability-based decision theory. TDT takes steps to avoid getting stuck two-boxing (choosing the more "rational" of the two choices) while still applying in the vast majority of other situations.
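For concreteness, here is the toy expected-value comparison under the standard payoffs (the opaque box holds $1,000,000 if one-boxing was predicted, the transparent box holds $1,000); the 99% predictor accuracy is an assumed figure for illustration, and this is the calculation that makes one-boxing attractive:

```python
# Newcomb's problem, expected values conditioned on the prediction being
# correct with the assumed accuracy.  Payoffs are the standard ones.
accuracy = 0.99

ev_one_box = accuracy * 1_000_000
ev_two_box = (1 - accuracy) * 1_000_000 + 1_000

print(f"EV(one-box) = ${ev_one_box:,.0f}")  # $990,000
print(f"EV(two-box) = ${ev_two_box:,.0f}")  # $11,000
```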
0J_Taylor
Apologies, I know what Newcomb's problem is. I simply do not know anything about its history and the history of its attempted solutions.
0lessdazed
...efficiently. Most readers will misinterpret that. The question for most was/is instead "Formally, why should I one-box on Newcomb's problem?"
[-]Shmi260

What should SI do about this?

I think that separating instrumental rationality from the Singularity/FAI ideas will help. Hopefully this project is coming along nicely.

8lukeprog
Yes, we're full steam ahead on this one.

(I was going to write a post on 'why I'm skeptical about SIAI', but I guess this thread is a good place to put it. This was written in a bit of a rush - if it sounds like I am dissing you guys, that isn't my intention.)

I think the issue isn't so much 'arrogance' per se - I don't think many of your audience would care about accurate boasts - but rather your arrogance isn't backed up with any substantial achievement:

You say you're right on the bleeding edge in very hard bits of technical mathematics ("we have 30-40 papers which could be published on decision theory" in one of lukeprog's Q&As, wasn't it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) you've been making the same boasts about all these advances you are making for years, and they've never been substantiated.

You say you've solved all these important philosophical questions (Newcomb, Quantum mechanics, Free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things - indeed, a complaint I've heard commonly is that Lesswrong just simply misundersta... (read more)

3lukeprog
No, that wasn't it. I said 30-40 papers of research. Most of that is strategic research, like Carl Shulman's papers, not decision theory work. Otherwise, I almost entirely agree with your comments.

I think Eli, as the main representative of SI, should be more careful about how he does things, and resist his natural instinct to declare people stupid (-> Especially <- if he's basically right).

Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more tactfully and with more face-saving for the victim. Now you have this guy and at least one "friend" with loads of free time going around putting down anything associated with Eliezer or SI on the Internet. For 5 minutes of extra thinking and not typing, this could have been largely avoided. Eli has to realize that he's in a good position to needlessly hurt his (and our) own causes.

Another case in point was handling the Roko affair. There is doing the right thing, but you can do it without being an asshole (also IMO the "ownership" of LW policies is still an unresolved issue, but at least it's mostly "between friends"). If something like this needs to be done Eli needs to pass the keyboard to cooler heads.

9Nick_Tarleton
Note: happened five years ago
8Multiheaded
Certainly anyone building a Serious & Official image for themselves should avoid mentioning any posteriors not of the probability kind in their public things.
3Dr_Manhattan
Already noted, and I'm guessing the situation improved. But it's still a symptom of a harmful personality trait.

Why don't SIAI researchers decide to definitively solve some difficult unsolved mathematics, programming, or engineering problem as proof of their abilities?

Yes, it would waste time that could have been spent on AI-related philosophy, but it would unambiguously support the competency of SIAI.

9WrongBot
You mean, like decision theory? Both Timeless Decision Theory (which Eliezer developed) and Updateless Decision Theory (developed mostly by folks who are now SI Research Associates) are groundbreaking work in the field, and both are currently being written up for publication, I believe.