Cross-posted here.

(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)

Thanks to the generosity of several major donors, every donation to the Singularity Institute made from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!

Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.

(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)

Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition.  Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.

For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.

Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.

Accomplishments in 2012

Future Plans You Can Help Support

In the coming months, we plan to do the following:

  • As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and launching a new website.
  • Eliezer will publish his sequence Open Problems in Friendly AI.
  • We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: The Sequences, 2006-2009, Facing the Singularity, and The Hanson-Yudkowsky AI Foom Debate.
  • We will publish several more research papers, including "Responses to Catastrophic AGI Risk: A Survey" and a short, technical introduction to timeless decision theory.
  • We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.

(Other projects are still being surveyed for likely cost and strategic impact.)

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or

$115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.

I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.


I donated 20,000$ now, in addition to 110,000$ earlier this year.

Thanks very much!!

Holy pickled waffles on a pogo stick. Thanks, dude.

Is there anything you're willing to say about how you acquired that dough? My model of you has earned less in a lifetime.

I value my free time far too much to work for a living. So your model is correct on that count. I had planned to be mostly unemployed with occasional freelance programming jobs, and generally keep costs down.

But then a couple years ago my hobby accidentally turned into a business, and it's doing well. "Accidentally" because it started with companies contacting me and saying "We know you're giving it away for free, but free isn't good enough for us. We want to buy a bunch of copies." And because my co-founder took charge of the negotiations and other non-programming bits, so it still feels like a hobby to me.

Both my non-motivation to work and my willingness to donate a large fraction of my income have a common cause, namely thinking of money in far-mode, i.e. not alieving The Unit of Caring on either side of the scale.

Yeah, I know exactly who you are, I just didn't want to bust privacy or drop creepy hints. I didn't know that VideoLAN projects were financially independent of each other, so that explains where the profit comes from. It's just that I didn't expect two guys in a basement to make that much, and you're too young (and didn't have much income before anyway) to have significant savings. So there's more money in successful codecs than I guessed.

you're too young (and didn't have much income before anyway) to have significant savings.

Err, I haven't yet earned as much from the lazy entrepreneur route as I would have if I had taken a standard programming job for the past 7 years (though I'll pass that point within a few months at the current rate). So don't go blaming my cohort's age if they haven't saved and/or donated as much as me. I'm with Rain in spluttering at how people can have an income and not have money.

I don't, either -- possibly because I've never been in real economic hardships; I think if I had grown up in a poorer family I probably would. (I do try to be frugal because so far I've lived almost exclusively on my parents' income and it seems unfair towards them to waste their money, though.)
(At the time of this comment) 27 karma for a $20k donation, 13 karma for $250, 9 karma for $20 (and a joke) ... something's amiss with the karma-$ currency exchange rate!

Assume that the prospect of karma can motivate someone to make a donation, but that once they decide to donate, karma no longer influences how much they give. Then upvoting every donation, regardless of size, is the policy that maximizes money to SI. I'm not sure how realistic that model is, but it seems intuitive to me.

It might motivate someone to donate $20 rather than $5 if there is a karma difference; probably not $20000 rather than $20, though.

What do you expect to happen? We don't have enough users giving karma for donation to sustain a linear exchange rate in the [$20, $20000] range. Unless, I suppose, we give up any attempt at fine resolution over the [$1, $500] range.

In practice, what most people are probably doing is picking a threshold (possibly $0) beyond which they give karma for a donation. This could be improved: you could pick a large threshold beyond which you give 1 karma, and give fractional karma (by flipping a biased coin) below that threshold. However, if the large threshold were anywhere close to $20000, and your fractional karma scales linearly, then you would pretty much never give karma to the other donations.

Edit: after doing some simulations, I'm no longer sure the fractional approach is an improvement. It gives interesting graphs, though!
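For the curious, the coin-flip scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the simulation code mentioned in the edit; the function name `karma_awarded` and the $20,000 threshold are assumptions for the example.

```python
import random

def karma_awarded(donation, threshold=20000.0, rng=random):
    """Award 1 karma at or above the threshold; below it, award 1 karma
    with probability donation/threshold (a biased coin flip), so the
    *expected* karma still scales linearly with donation size."""
    if donation >= threshold:
        return 1
    return 1 if rng.random() < donation / threshold else 0
```

Below the threshold, each voter's expected karma is donation/threshold, so the expected total from many voters remains proportional to the donation even though each individual voter can only cast whole upvotes.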

If we knew the Singularity Institute's approximate budget, we could fix this by assuming log-utility in money, but this is complicated.

Reversed scope insensitivity?

"No, she wouldn't say anything to me about Lucius afterwards, except to stay away from him. So during the Incident at the Potions Shop, while Professor McGonagall was busy yelling at the shopkeeper and trying to get everything under control, I grabbed one of the customers and asked them about Lucius."

Draco's eyes were wide again. "Did you really?"

Harry gave Draco a puzzled look. "If I lied the first time, I'm not going to tell you the truth just because you ask twice."

Nice quote.

"Really?" is more polite to say than "I find that hard to believe, can you provide confirming evidence" or "[citation needed]", though. Also, sometimes people actually will say "No, I was kidding" if you ask them.

Or "Oops, I accidentally typed an extra zero. Twice."

That is unlikely owing to the placement of the commas.

No, that just makes it worse, because 20,00$ could be referring to donating 20 dollars.

Ah, right. I had forgotten that some people use commas where I would expect periods. Adding an extra zero twice is still somewhat unlikely, though. My current hypotheses about the distribution of LW users make it more plausible that the tail of high income can afford fairly large donations.

There is a largely innocuous conversation below this comment which has been banned in its entirety. Who did this? Why?


I continue to donate $1000 a month, and intend to reduce my retirement savings next year so I can donate more.

That's a hell of a gamble, kid. Rock on.


The singularity is my retirement plan.

-- tocomment, in a Hacker News post

The quantum lottery is my retirement plan, my messy messy retirement plan.

I'm glad I'm not the only one that thinks like that. :)

I donated 650$ and will donate the same amount to the CFAR fundraiser.


Check in the mail for $3k

(Took me long enough.)

Now give me my karma.

Thanks very much!!

I have been donating $100 monthly on a subscription payment and will continue to do so.

Easier on the cash-flow than a lump donation. More fuzzies per year, too.

I just donated $250. Can't afford as much as last year; I switched to a lower-paying job that makes me happier.

Just donated 400 €.

My new year's resolution is tithing, to be split roughly half-in-half between "serious" causes and things like supporting my favorite webcomics/fansubbers/whatever. As part of the former, I decided to add 1000 € to the above donation.

Thanks so much!

Sent in 100€. Merry Newtonmas!



I am looking forward to the ebooks. I hope you'll provide them in ePub format, for those of us who prefer that. [I was pleased to donate $40, which should soon be matched by my employer as part of the employee-match program, thus getting me double-matched!]

I donated $20, roughly the price of a cheap hardcover novel.

Still donating 500 a month.

Five cheers for this! Those who are steadily donating should get applause every time.

Just donated $500 (with the Singularity credit card, so it's really more like $505 ^_^).


I donated 250$.

Update: No, I apparently did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might've gone wrong? I've a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I'm sure that my card is still perfectly valid.


I'm having the same problem. I used the card to buy modafinil yesterday, which might raise a red flag in fraud detection software? But if you're having it too, I'd update in the direction of it being a problem on SIAI's end.

Has anyone successfully donated since Kutta posted?

edit - Amazon is declining my card as well.

edit 2 - It's sorted out now, just donated £185.

I'm looking into this now, can you send me an email at so we can share any further details necessary to work out the problem?

Email sent.

After investigating the issue, it proved to be a problem on Kutta's side, not ours.

Thanks for your effort. I'll contact my bank.

I just verified that donations in general are working via PayPal and Google Checkout. We'll investigate this specific issue to see where the problem is.


Ok I think I just set up a $1000 monthly.

This is great news. Thanks to Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer for providing matching funds!

I have some money that I was saving for something like this, but I also just saw Eliezer's (very convincing) request for CFAR donations yesterday and heard a rumor that SIAI was trying to get people to donate to CFAR because they needed it more.

This seems weird to me because I would expect that with SIAI's latest announcement they have shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them, but I'd be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.

1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.

2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.

So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...


...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...


...even the judgment about exactly where a marginal dollar spent is more valuable is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision.... (read more)

Thank you; that helps clarify the issue for me. Since people who know more seem to think it's a tossup and SIAI motivates me more, I gave $250 to them.

And CFAR really really needs to succeed for SIAI to be viable in the long-term.

That's an extremely strong claim. Is that actually your belief? Not merely that CFAR success would be useful to SIAI success? There is no alternate plan for SIAI to be successful that doesn't rely on CFAR?

I have backup plans, but they tend to look a lot like "Try founding CFAR again."

I don't know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There's other things I could try, and would if necessary try, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn't give up no matter what, doesn't mean there wouldn't be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk of probability sliced off if I couldn't make any alternative to CFAR work.

I realize a lot of people think it shouldn't be impossible to fund SIAI without all that rationality stuff. They haven't tried it. Lots of stuff sounds easy if you haven't tried it.

Thankyou Eliezer. I'm fascinated by the reasoning and analysis that you're hinting at here. It helps puts the decisions you and SIAI have made in perspective. Could you give a ballpark estimate of how much of the importance of successful rationality spin offs is based on expectations of producing core FAI researchers versus producing FAI funding?
Eliezer Yudkowsky:
I've tried less hard to get core FAI researchers than funding. I suspect that given sufficient funding produced by magic, it would be possible to solve the core-FAI-researchers issue by finding the people and talking to them directly - but I haven't tried it!

How much money would you need magicked to allow you to shed fundraising and infrastructure, etc, and just hire and hole up with a dream team of hyper-competent maths wonks? Restated, at which set amount would SIAI be comfortably able to aggressively pursue its long-term research?

He once mentioned a figure of US $10 million / year. Feels like he's made a similar remark more recently, but it didn't show in my brief search.
Is this still your view?

[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them

Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI's narrower goals involving technical research. Eliezer wrote in 2009 that "after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists," because Friendly AI is one of those causes that needs people to be "a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts."

So, CFAR's own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won't capture all or even most of that value, and even though CFAR doesn't teach classes on AI risk.

I've certainly found it to be the case that on average, people who get in contact with SI via an interest in rationality tend to be more useful than people who get in contact with SI via an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Ri... (read more)

Why is that your response? More precisely... do you actually believe that I should base my charitable giving on my level of excitement? Or do you assert that despite not believing it for some reason?


Oh, right...

Basically, it's because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR's impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they're most excited about should cause them to give the maximum amount they're going to give. For weird humans who make giving decisions mostly by multiplication, well, they've already translated "whichever organization you're most excited to support" into "whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals]."

I mailed a check for $20,000.

I'm excited about the pivot to research.

I just donated $1,000... to CFAR. Does that still count?

Thanks! That counts for CFAR's drive.

I assume a mailed cheque will work?

This post made me super excited. I was just thinking about donating before I found this. Now I really have to. Thanks for the initiative.

Certainly. Please see the instructions under 'Donate by Check' on the donate page. Thanks very much!

I've just donated 500 Canadian dollars to the Singularity Institute (at the moment, 1 Canadian dollar = 1.01 US dollar).


Gave 200 $ this time.


As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and ...

OK, this is big news. Don't know how I missed this one.

In general, I'd say that people's desire to be anonymous should be respected unless there's a very good reason to override it, and solving a puzzle is not a very good reason.

Anyway, he pretty much admitted who he is now.

I have donated $30 in payback for a free dinner hosted by the Melbourne LessWrong meetup.


Donated $150. One more day! Please donate, too!


Does agreeing to display my name in the public donor list help the SI in any way?

Social proof. Very useful.

Okay, thanks. Another question: Will my donation be matched even if I donate to the Singularity Institute For AI Canada Association?
I asked Louie Helm, as advised by Joshua Fox. Below was his reply [edit: which he's asked me to remove].

Eliezer Yudkowsky:

FYI, the SIAI Canada page on the Singularity Institute website still says this: I know if I hadn't asked Louie before donating to SIAI I would have donated to SIAI Canada, thinking it would have the same consequences except I'd get a tax break. I wonder how many thousands of dollars you've lost this way?

Not much, at least not since I took over SI in November 2011. SIAI-CA executed our recommendation for how to spend the last $5k they've spent since November 2011 — though it can be quite a lot of effort to find a good way for SIAI-CA to spend the money from Canada. Even more importantly, we know who all our biggest supporters in Canada are, so we've explained the situation to them personally and they generally donate directly rather than through SIAI-CA.

I suggest you ask Louie Helm.

It helps people like me, who look at it almost like a competition. The more people competing, the merrier.


Yeah, I wanted to catch Jaan Tallinn on the Top Donors page to prove some random middle-class person could do better charity than the rich types, but he keeps pulling further ahead and I dropped a couple places in the rankings :-/ Gotta work harder!


Not sure if anyone else noticed, but the end date was pushed back until Jan 20. Although personally, I would rather donate to CFAR (and have done so, $500, and another $500 before the fundraiser timeframe.)

What report is that? A site-search for "140,000" turns up a number of figures but none from EY; the latest Form 990 I know of lists his compensation at ~$104k (pg7, summing both columns) or ~50% less than your number.

Eliezer Yudkowsky:
I've sometimes earned more than my SIAI base salary from speaking fees, but I've never earned $140K in any year, and will cheerfully exhibit my tax returns if Luke, Holden, or any other sufficiently reputable entity requests them. I've also got no idea what that "estimated extra compensation" line is about, unless it's health insurance or something - per the wishes of Peter Thiel, SIAI never pays $100k in any year to any employee, including bonuses. (Note that, as usual when a poster has received many sufficiently extreme downvotes in their history, I designate them a troll and delete their comments at will.)

Luke, the link in the third line "Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program" does not work.

Meant to go to Will an editor please fix? I'm working from my phone now.

...unless the donation bar is lagging, slightly less than 1/3rd the hoped-for sum has been filled, with only about 11 days remaining. That's rather worrisome.

The donation bar lags somewhat, and it's normal for most of the funds to come in "at the last minute."

Do we get some kind of reasonable guarantee that there won't in the future be an even better matching offer (say a tripling of our impact), or is the idea here that the value of an SIAI donation is heavily time discounted?

We've never done such a drive in the past and have no current plans for one. We do have a pretty heavy discount rate. Sorry I can't say more.

oooooohhhh, super secret time pressure. Maybe I should donate more...

There doesn't seem to be anything SIAI would gain from running such a program. If big donors are willing to give $N to match donations, then with dollar-for-dollar matching SIAI can reasonably hope to raise $2N from the fundraiser; with two-dollars-for-every-dollar matching, SIAI will only get $(3/2)N. Unless, of course, the big donors would donate more if SIAI set up the second type of matching program, but why would they?

The only scenario I can see where this would make sense is if SIAI expects small donors to donate less than $(1/2)N in a dollar-for-dollar scheme, so that its total gain from the fundraiser would be below $(3/2)N, but expects to get the full $(3/2)N in a two-dollars-for-every-dollar scheme. But not only does this seem like a very unlikely story, even if it did happen, you should still want to donate in the current fundraiser if you're willing to donate at all, since that leaves more matching funds available in the later two-dollars-for-every-dollar fundraiser to attract the people who, by hypothesis, aren't willing to donate at dollar-for-dollar.

One year later, I pretty much need to eat my hat on the "very unlikely story" comment above. MIRI's Winter 2013 Matching Challenge, which offers 3:1 matching for new large donors (people donating >= $5K who have donated less than $5K in total in the past), has been a roaring success: almost $232K of the $250K maximum had been donated at the time of writing, with more than three weeks left. By contrast, the Winter 2012 Fundraiser the parent is commenting on only reached its $115K goal after a deadline extension, and the Summer 2013 Matching Challenge only reached its $200K goal around the time of the deadline. (There's clearly an upward growth curve as well, but it does seem clear that lots of people wanted to take advantage of the 3:1.) So far I still stand by the rest of the comment, though:
Given that historically SI has completed all matching drives to 100%, I wouldn't even recommend waiting for a 2x match to donate.
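The matching arithmetic discussed above can be sketched with a small helper. The function name `fundraiser_total` and the assumption that small donors fully exhaust the match budget are mine, purely for illustration:

```python
def fundraiser_total(match_budget, match_ratio):
    """Total raised when big donors pledge `match_budget` dollars and
    match each small-donor dollar with `match_ratio` dollars, assuming
    small donors donate enough to exhaust the whole match budget."""
    small_donations = match_budget / match_ratio
    return small_donations + match_budget
```

With a $100K match budget, 1:1 matching yields $200K in total while 2:1 matching yields only $150K, which is the point the comment is making: a higher match ratio extracts less from small donors per matched dollar.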

Probably the best of all is to be a matching drive sponsor.

I can't argue with that!


To stay honest though, if someone is reading this thread and planning to do this, they should contact SI now with the amount they're willing to match during a future drive... otherwise they're highly liable to fall prey to donor akrasia.

I seem to recall reading a study that concluded that the multiplier on the match (above 0.5x) doesn't change the increase in donations much. Cursory searching didn't refind it though.

How encouraging is it to people to see comments saying that people donated? To me it just seems like kinda self-aggrandizing karma-whoring. Have you read this thread and been influenced to donate, or to donate more?

I was influenced both to donate and to donate more. Social proof is very powerful. I also would not have posted if I didn't think it would encourage people to donate or donate more.

If I didn't hope it would help encourage others, I wouldn't post about my donation. I can't think of a reason that knowing of a donation of mine might discourage others from donating, so I believe it will encourage them, even if only minimally. Generalizing to "this and similar threads", I think the answer is yes as far as I'm concerned.

I highly support changing your name--there's all sorts of bad juju associated with the term "singularity". My advice, keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. The term "singularity" has suffered the same fate as "cybernetics".

[This comment is no longer endorsed by its author]
I note that you've retracted your post, but I still feel the need to ask: shouldn't the name reflect what they do?
In terms of minimizing the status loss for academics affiliating with SIAI, a banal, minimally descriptive name may be superior. People often overestimate the value of the piquant. Beige may not excite, but it doesn't offend. Any term which has the potential to become a buzzword, or acquire alternative definitions, should be avoided; the more exciting the term, the higher the chance of appropriation.

This was the point I was trying to make; on rereading it after posting, I realized it was remarkably poorly written and wasn't clearly conveying what I was thinking when I wrote it. I didn't have time to edit it then, so I retracted.
BTW, here's an interesting blog post about considerations relevant to naming stuff.
Thank you for clarifying.