12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.




  • Images are now hosted on LessWrong.com.
  • The 2010 Form 990 data will be available later this month.
  • It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.


Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.

I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.

So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.

My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.

The SIAI's Form 990s are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar.

  1. 2002 (Form 990-EZ)
  2. 2003 (Form 990-EZ)
  3. 2004 (Form 990-EZ)
  4. 2005 (Form 990)
  5. 2006 (Form 990)
  6. 2007 (Form 990)
  7. 2008 (Form 990-EZ)
  8. 2009 (Form 990)

SIAI Financial Overview

The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. "The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence." The SIAI is also the founder of Less Wrong.

The graphs above offer an accurate summary of the SIAI's financial state since 2002. Sometimes the end-of-year balances listed in the Form 990s don't match what you'd get if you did the math by hand. These cases are noted as discrepancies between the filed year-end balance and the expected year-end balance, or between the filed year-start balance and the expected year-start balance.

  1. Filing Error 1 - There appears to be a minor error of $4.65 in the end-of-year balance in the 2004 document. Part I, Line 18 appears to have been summed incorrectly: $32,445.76 is listed, but the expected result is $32,450.41. The Part II balance sheet calculations agree with the erroneous figure, so the source of the error is unclear. The 2005 filing reports the expected start-of-year balance, so this was probably just a typo in the 2004 document.
  2. Filing Error 2 - The 2006 document reports a year-start balance of $95,105.00 when the expected year-start balance is $165,284.00, a discrepancy of $70,179.00. This amount is close to the estimated Program Service Accomplishments expense of $72,000.00 in the 2005 Form 990, Part III, Line F. It looks like the service expenses were not completely included in Part II. The money is not missing: subsequent forms show expected values moving forward.
  3. Theft - The organization reported $118,803.00 in theft in 2009 resulting in a year end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
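The consistency check behind these notes is simple arithmetic: the reported start-of-year balance plus revenue minus expenses should equal the reported end-of-year balance, and each year's start balance should match the prior year's end balance. A minimal sketch of that check in Python, where the first example uses made-up figures and the second uses the 2006 figures cited in Filing Error 2:

```python
def expected_end_balance(start, revenue, expenses):
    """Expected end-of-year balance from a Form 990's top-line figures."""
    return round(start + revenue - expenses, 2)

# Hypothetical year, purely for illustration -- not actual SIAI figures:
assert expected_end_balance(100_000.00, 250_000.00, 230_000.00) == 120_000.00

# Filing Error 2: the 2006 form reports a start-of-year balance of $95,105.00,
# but the prior year's filing implies $165,284.00.
reported_start_2006 = 95_105.00
expected_start_2006 = 165_284.00
print(round(expected_start_2006 - reported_start_2006, 2))  # 70179.0
```

Running the same check across every pair of adjacent filings is how the three discrepancies above surface mechanically.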

The SIAI has generated a revenue surplus every year except 2008. The 2008 deficit appears to be a spending down of excess surplus from 2007. Asset growth indicates that the SIAI is good at utilizing the funds it has available without overspending. The organization is expanding its menu of services, but not so fast that it risks going broke.

Nonetheless, the current asset balance is insufficient to sustain a year of operation at the existing rate of expenditure. A significant loss of donation revenue would force a reduction in services. Such a loss may be unlikely, but a reasonable goal for the organization would be to build up a year's reserves.
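That reserve target can be made concrete as a runway calculation: months of runway equal assets divided by monthly expenses. A quick sketch with made-up figures (the real totals are in the Form 990s and the linked spreadsheet, not these numbers):

```python
def runway_months(assets, annual_expenses):
    """How many months the current assets could sustain the current burn rate."""
    return assets / (annual_expenses / 12)

# Illustrative figures only -- assets below one year's expenses,
# as described above for the SIAI.
print(round(runway_months(assets=200_000.00, annual_expenses=500_000.00), 1))  # 4.8
```

A year's reserves means pushing this number to 12 or more.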


Revenue is composed of public support, program services (events and conferences held, etc.), and investment interest. The "Other" category tends to include Amazon.com affiliate income and similar items.

Income from public support has grown steadily, with a notable and sustained increase starting in 2006. This increase is a result of new contributions from big donors. For example, public support in 2007 is largely composed of significant contributions from Peter Thiel ($125k), Brian Cartmell ($75k), and Robert F. Zahra Jr. ($123k), for $323k total in large-scale individual contributions (breakdown below).

In 2007 the SIAI started receiving income from program services. Currently all "Program Service" revenue is from operation of the Singularity Summit. In 2010 the summit generated surplus revenue for the SIAI. This is a significant achievement, as it means the organization has created a sustainable service that could fund further services moving forward.

A specific analysis of the summit is below.


Expenses are composed of grants paid to winners, benefits paid to members, officer compensation, contracts, travel, program services, and an other category.

The contracts column in the chart below includes legal and accounting fees. The other column includes administrative fees and other operational costs. I didn’t see reason to break the columns down further. In many cases the Form 990s provide more detailed itemization. If you care about how much officers spent on gas or when they bought new computers you might find the answers in the source.

I don’t have data for 2000 or 2001, but left the rows in the spreadsheet in case it can be filled in later.

Program expenses have grown over the years, but not unreasonably. Indeed, officer compensation has declined steadily for several years. The grants in 2002, 2003, and 2004 were paid to Eliezer Yudkowsky for work relevant to Artificial Intelligence.

The program expenses category includes operating the Singularity Summit, Visiting Fellows Program, etc. Some of the cost of these programs is also included in the other category. For example, the 2007 Singularity Summit is reported as costing $101,577.00 but this total amount is accounted for in multiple sections.

It appears that 2009 was a more productive year than 2008 and also less expensive. 2009 saw a larger Singularity Summit than in 2008 and also the creation of the Visiting Fellows Program.

Big Donors

This is not an exhaustive list of contributions. The SIAI’s 2009 filing details major support donations for several previous years. Contributions in the 2010 column are derived from http://intelligence.org/donors.  Known contributions of less than $5,000 are excluded for the sake of brevity. The 2006 donation from Peter Thiel is sourced from a discussion with the SIAI.

Peter Thiel and several other big donors compose the bulk of the organization's revenue. It would be good to see a broader base of donations moving forward. Note, however, that the base of donations has been improving. I don't have the 2010 Form 990 yet, but 2010 appears to have been the best year so far in terms of both the total amount donated and the number of individual donors (based on conversation with SIAI members).

Officer Compensation

From 2002 through 2005, Eliezer Yudkowsky received compensation in the form of grants from the SIAI for AI research. The Form 990s note that no public funds were used for Eliezer's research grants, as he is also an officer. Starting in 2006, all compensation for key officers is reported as salary instead of grants.

Compensation spiked in 2006, the same year public support greatly increased. Since then, officer compensation has decreased steadily despite continued increases in public support. It appears that the SIAI has been managing its resources carefully in recent years, putting more money into programs than into officer compensation.

Eliezer's base compensation as salary increased 20% in 2008. It seems reasonable to compare Eliezer's salary with that of professional software developers. Eliezer would be able to make a fair amount more working in private industry as a software developer.

Mr. Yudkowsky clarifies: "The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010." In this case we would expect the 2010 Form 990 to show a salary reduced by one month's pay.

Moving forward, the SIAI will have to grapple with the high cost of recruiting top tier programmers and academics to do real work. I believe this is an argument for the SIAI improving its asset sheet. More money in the bank means more of an ability to take advantage of recruitment opportunities if they present themselves.

Singularity Summit

Founded in 2006 by the SIAI in cooperation with Ray Kurzweil and Peter Thiel, the Singularity Summit focuses on a broad number of topics related to the Singularity and emerging technologies. (1)

The Singularity Summit was free until 2008 when the SIAI chose to begin charging registration fees and accepting sponsorships. (2)

Attendee counts are estimates drawn from SIAI Form 990 filings. 2010 is purported to be the largest conference so far. Beyond the core conference attendees, hundreds of thousands of online viewers are reached through recordings of the Summit sessions. (A)

The cost of running the summit has increased annually, but revenue from sponsorships and registration has kept pace. The conference carries logistic and administrative costs, but these don't significantly impact the SIAI budget. This makes the conference a valuable blend of outreach and education. If the conference convinces someone to donate or in some way directly support work against existential risks, the benefits are effectively free (or at the very least come at no cost to other programs).

Is the Singularity Summit successful?

It’s difficult to evaluate the success of conferences. So many of the benefits are realized downstream of the actual event. Nonetheless, the attendee counts and widening exposure seem to bring immense value for the cost. Several factors contribute to a sense that the conference is a success:

  • In 2010 the Summit became a positive revenue generating exercise in its own right. With careful stewardship, the Singularity Summit could grow to generate a reliable annual revenue for the SIAI.
  • The ability to run an efficient conference is itself valuable. Should it choose to, the SIAI could run other types of conferences or special interest events in the future with a good expectation of success.
  • The high visibility of the Summit plants seeds for future fund raising. Conference attendees likely benefit as much or more from networking as they do from the content of the sessions. Networking builds relationships between people able to coordinate to solve problems or fund solutions to problems.
  • The Singularity Summit has generated ongoing public interest and media coverage. Notable articles can be found in Popular Science (3), Popular Mechanics (4), the Guardian (5), and TIME Magazine (6). Quality media coverage raises public awareness of Singularity-related topics. There is a strong argument that a person interested in raising awareness of futurist topics or existential risk reaches a wide audience by supporting the Singularity Summit.

When discussing “future shock levels” -- gaps in exposure to and understanding of futurist concepts -- Eliezer Yudkowsky wrote, “In general, one shock level gets you enthusiasm, two gets you a strong reaction - wild enthusiasm or disbelief, three gets you frightened - not necessarily hostile, but frightened, and four can get you burned at the stake.” (7) Most futurists are familiar with this sentiment. Increased public exposure to unfamiliar concepts through the positive media coverage brought about by the Singularity Summit works to improve the legitimacy of those concepts and reduce future shock.

The result is that hard problems get easier to solve. Experts interested in helping, but afraid of social condemnation, will be more likely to do core research. The curious will be further motivated to break problems down. Vague far-mode thinking about future technologies will, for a few, shift into near-mode thinking about solutions. Public reaction to what would otherwise be shocking concepts will shift away from the extreme. The future becomes more conditioned to accept the real work and real costs of battling existential risk.

SIAI Milestones

This is not a complete list of SIAI milestones, but covers quite a few of the materials and events that the SIAI has produced over the years.



2006

  • Fundraising efforts expand significantly.
  • SIAI hosts the first Singularity Summit at Stanford.



2008

  • SIAI hosts the Singularity Summit in San Jose.
  • SIAI Interview Series is expanded.
  • SIAI begins its summer intern program.


2009

Significant detail on 2009 achievements is available here. More publications are available here.

  • RF Eliezer Yudkowsky completes the rationality sequences.
  • Less Wrong is founded.
  • SIAI hosts the Singularity Summit in New York.
  • RF Anna Salamon speaks on technological forecasting at the Santa Fe institute.
  • SIAI establishes the Visiting Fellows Program. Graduate and undergraduate students in AI-related disciplines develop related talks and papers.

Papers and talks from SIAI fellows produced in 2009:

  1. “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, Anna Salamon, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy (ECAP) in July ‘09.)
  2. “Arms Control and Intelligence Explosions”, by Carl Shulman. (Also presented at ECAP.)
  3. “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson. (Presented at the Asia-Pacific Conference of Computing and Philosophy (APCAP) in October ‘09.)
  4. “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton. (Also presented at APCAP.)
  5. “Long-term AI forecasting: Building methodologies that work”, an invited presentation by Anna Salamon at the Santa Fe Institute conference on forecasting.
  6. “Shaping the Intelligence Explosion” and “How much it matters to know what matters: A back of the envelope calculation”, presentations by Anna Salamon at the Singularity Summit 2009 in October.
  7. “Pathways to Beneficial Artificial General Intelligence: Virtual Pets, Robot Children, Artificial Bioscientists, and Beyond”, a presentation by SIAI Director of Research Ben Goertzel at Singularity Summit 2009.
  8. “Cognitive Biases and Giant Risks”, a presentation by SIAI Research Fellow Eliezer Yudkowsky at Singularity Summit 2009.
  9. “Convergence of Expected Utility for Universal Artificial Intelligence”, a paper by Peter de Blanc, an SIAI Visiting Fellow.

* Text for this list of papers reproduced from here.

A list of achievements, papers, and talks from 2010 is pending. See also the Singularity Summit content links above.

Further Editorial Thoughts...

Prior to doing this investigation I had some expectation that the SIAI was a money losing operation. I didn’t expect the Singularity Summit to be making money. I had an expectation that Eliezer probably made around $70k (programmer money discounted for being paid by a non-profit). I figured the SIAI had a broad donor base of small donations. I was off base on all counts.

I had some expectation that the SIAI was a money losing operation.

I had weak confidence in this belief, as I don’t know a lot about the finances of public organizations. The SIAI appears to be managing its cash reserves well. It would be good to see the SIAI build up some asset reserves so that it could operate comfortably in years where public support dips or so that it could take advantage of unexpected opportunities.

Overall, the allocation of funds strikes me as highly efficient.

I didn’t expect the Singularity Summit to be making money.

This was a surprising finding, although I had incorrectly conditioned my expectation on experiences working with game industry conferences. I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: the Summit is growing and showing better results on the revenue chart each year. If you attend the conference and contribute to the event, you add pure value. As discussed above, the benefits of the conference fall squarely into the “reducing existential risk” category. Losing the Summit would be a blow to ensuring a safe future.

I know that the Summit will not itself do the hard work of dissolving and solving problems, of synthesizing new theories, of testing those theories, or of implementing solutions. The value of the Summit lies in its ability to raise awareness of the work that needs to be done, to create networks of people to do that work, to lower public shock at the implications of that work, and to generate funding for those doing that work.

I had an expectation that Eliezer probably made around $70k.

Eliezer's compensation is slightly more than I thought. I'm not sure what upper bound I would have balked at or would balk at. I do have some concern about the cost of recruiting additional Research Fellows. The cost of additional RFs has to be weighed against new programs like Visiting Fellows.

At the same time, the organization has been able to expand services without draining the coffers. A donor can hold a strong expectation that the bulk of their donation will go toward actual work in the form of salaries for working personnel or events like the Visiting Fellows Program.

I figured the SIAI had a broad donor base of small donations.

I must have been out to lunch when making this prediction. I figured the SIAI was mostly supported by futurism enthusiasts and small scale rationalists.

The organization has a heavy reliance on major donor support. I would expect the 2010 filing to reveal a broadening of revenue, but I do not expect the organization to have become independent of big donor support. Big donor support is a good thing to have, but more long term stability would be provided by a broader base of supporters.

My suggestions to the SIAI:

  • Consider relocating to a cheaper part of the planet. Research Fellows will likely have to accept lower than market average compensation for their work or no compensation at all. Better to live in an area where compensation goes farther.
  • Consider increasing savings to allow for a larger safety net and the ability to take advantage of opportunities.
  • Consider itemizing program service expenses in more detail. It isn’t required, but the transparency makes for better decision making on the part of donors.
  • Consider providing more information on what Eliezer and other Research Fellows are working on from time to time. You are building two communities. A community of polymaths who will solve hard problems and a community of supporters who believe in the efforts of the polymaths. The latter are more likely to continue their support if they have insight into the activities of the former.

Moving forward:

John Salvatier provided me with good insight into next steps for gaining further clarity into the SIAI’s operational goals, methodology, and financial standing.

  • Contact GiveWell for expert advice on organizational analysis to help clarify good next steps.
  • Get more information on current and forthcoming SIAI research projects. Is there active work in the research areas the SIAI has identified? Is there a game plan for attacking particular problems in the research space?
  • Spend some time gathering information from SIAI members on how they would utilize new funds. Are there specific opportunities the SIAI has identified? Where is the organization “only limited by a lack of cash” -- if they had more funds, what would they immediately pursue?
  • Formulate methods of validating the SIAI’s execution of goals. It appears that the Summit is an example of efficient execution of the reducing existential risk goal by legitimizing the existential risk and AGI problem space and by building networks among interested individuals. How will donors verify the value of SIAI core research work in coming years?


At present, the financial position of the SIAI seems sound. The Singularity Summit stands as a particular success that should be acknowledged. The organization's ability to reduce officer compensation while expanding its programs is also notable.

Tax documents can only tell us so much. A deeper picture of the SIAI would reveal more of the moving parts within the organization. It would provide a better account of monthly activities and a means to measure future success or failure. The question for many supporters will not be “should I donate?” but “should I continue to donate?”, a question that can be answered by increased and ongoing transparency.

It is important that those who are concerned with existential risks, AGI, and the safety of future technologies and who choose to donate to the SIAI take a role in shaping a positive future for the organization. Donating in support of AI research is valuable, but donating and also telling others about the donation is far more valuable.

Consider the Sequence post ‘Why Our Kind Can’t Cooperate.’ If the SIAI is an organization worth supporting, and given that they are working in a problem space that currently only has strong traction with “our kind,” then there is a risk of the SIAI failing to reach its maximum potential because donors do not coordinate successfully. If you are a donor, stand up and be counted. Post on Less Wrong and describe why you donated. Let the SIAI post your name. Help other donors see that they aren’t acting alone.

Similarly, if you are critical of the SIAI think about why and write it up. Create a discussion and dig into the details. The path most likely to increase existential risk is the one where rational thinkers stay silent.

The SIAI’s current operating budget and donor revenue are very small. It is well within our community’s ability to effect change.


My research has led me to the conclusion I should donate to the SIAI (above my previous pledge in support of rationality boot camp). I already donate to Alcor and am an Alcor member. I have to determine an amount for the SIAI that won't cause wife aggro. Unilateral household financial decisions increase my personal existential risk. :P I will update this document or make a comment post when I know more.




My working spreadsheet is here.

(1) http://www.singularitysummit.com/

(2) http://lesswrong.com/lw/ts/singularity_summit_2008/

(3) http://www.popsci.com/scitech/article/2009-10/singularity-summit-2009-singularity-near

(4) http://www.popularmechanics.com/technology/engineering/robots/4332783

(5) http://www.guardian.co.uk/technology/2008/nov/06/artificialintelligenceai-engineering

(6) http://www.time.com/time/health/article/0,8599,2048138-1,00.html

(7) http://www.sl4.org/shocklevels.html

(A) Summit Content


Comments (208)

If anyone would like to help with fundraising for Singularity Institute (I know the OP expressed interest in the other thread), I can offer coordination and organizing support to help make your efforts more successful.

louie.helm AT singinst.org

I also have ideas for people who would like to help but don't know where to start. Please contact me if you're interested!

For instance, you should probably publicly join SIAI's Facebook Cause Page and help us raise money there. We are stuck at $9,606.01... just a tad shy of $10,000 (which would be a nice psychological milestone to pass!). This makes us roughly the 350th most popular philanthropic cause on Causes.com... and puts us behind other pressing global concerns like the "Art Creation Foundation For Children in Jacmel Haiti" and "Romania Animal Rescue". Seriously!

And, Yes: Singularity Institute does have other funds that were not raised on this site... but so do these other causes! It wouldn't hurt to look more respectable on public fundraising sites while simultaneously helping to raise money in a very efficient, publicly visible way. One project that might be helpful would be for someone to publicly track our assen...


You are a tireless laborer for good. Thanks for all you do.

Are we supposed to donate through facebook rather than directly?
YES, please do! It is directly in practice.
Good question. I have a recurring direct donation set-up, but maybe donating via the Facebook page will make it more interesting for my friends to have a deeper look (and maybe donate). Does anyone know what % of the donation via the causes app goes to the charity? I'm guessing it's not 100%, so I'm wondering if that x% is worth it to have it announced on Facebook. Although I could just announce it myself, I think I'll do that next time my donation happens.
Causes.com charges 4.75% to cover processing fees, but the added benefit of making your donation so publicly visible using Causes.com / Facebook makes it preferable. We're currently only the #22 most supported cause in the "Education" section. A few hundred dollars more would put us in the top 20.
Yes, I would recommend that.
Upvoted because it's now over $10k.

The organization reported $118,803.00 in theft in 2009 resulting in a year end asset balance lower than expected. The SIAI is currently pursuing legal restitution.

It isn't much harder to steal code than to steal money from a bank account. Given the nature of research being conducted by the SIAI, one of the first and most important steps would have to be to think about adequate security measures.

If you are a potential donor interested to mitigate risks from AI then before contributing money you will have to make sure that your contribution does not increase those risks even further.

If you believe that risks from AI are to be taken seriously then you should demand that any organisation that studies artificial general intelligence has to establish significant measures against third-party intrusion and industrial espionage that is at least on par with the biosafety level 4 required for work with dangerous and exotic agents.

It might be the case that the SIAI does already employ various measures against the possibility of theft of sensitive information, yet any evidence that hints at the possibility of weak security should be taken seriously. Especially the possibility that there are potentially untrustworthy people who can access critical material should be examined.

Upvoted for raising some important points. Ceteris paribus, one failure of internal controls is nontrivial evidence of future ones.

For these purposes one should distinguish between sections of the organization. Eliezer Yudkowsky and Marcello Herreshoff's AI work is a separate 'box' from other SIAI activities such as the Summit, Visiting Fellows program, etc. Eliezer is far more often said to be too cautious and secretive with respect to that than the other way around.

I think this is a legitimate concern. It's probably not a significant issue right now, but definitely would be one if SIAI started making dramatic progress towards AGI. I don't think it deserves the downvotes its getting.

Note: the comment has been completely rewritten since the original wave of downvoting. It's much better now.

I agree, this doesn't deserve to be downvoted. It should be possible for the SIAI to build security measures while also providing some transparency into the nature of that security in a way that doesn't also compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.
I am more concerned about the possibility that random employees at Google will succeed in making an AGI than I am about SIAI constructing one. To begin with, even if there were only 1000 employees at Google who were interested in AGI, and they were only interested enough to work 1 hour a month each, and they were only 80% as effective as Eliezer (as being some of the smartest people in the world doesn't quite put them on the same level as Eli), then if Eliezer will have AGI in, say, 2031, Google will have it in about 2017.

Personally, I expect even moderately complicated problems -- especially novel ones -- to not scale or decompose at all cleanly.

So, leaving aside all questions about who is smarter than whom, I don't expect a thousand smart people working an hour a month on a project to be nearly as productive as one smart person working eight hours a day.

If you could share your reasons for expecting otherwise, I might find them enlightening.

The idea is that they are sharing their information and findings, so that while they are less efficient than they would be working constantly on the problem, they are able to point out possible solutions to each other that one person working by himself would be less likely to notice except through a longer process. As there would be 4-5 people working on the project at any one time during the month, I assume they would be working in a group and would stagger their times such that a nearly continuous effort is produced. Also, as much of the problem involves thinking about things, by not focusing on the issue constantly they may be more likely to come up with a solution than if they were focusing on it constantly. This is a hypothetical; I have no idea how many people at Google are interested in AI or how much time they spend on it. I would imagine that there most likely are quite a few people at Google working on AGI, as it relates directly to Google's core business, and that they work on it significantly more than one hour a month each. (edit: the comment with intelligence and Eli was a pun.)
I don’t get it. I can haz Xplanation?
The word Eli can also be used for god, hence the pun.
Oh :-)
Your post employs good parallelism of form, yet poor parallelism of substance. EDIT: no longer relevant, but kept for context
I added to the comment and expanded on what I thought would be an obvious inference.
I didn't find your inference too oblique, I found it too inaccurate.
I am at least 95% confident that the procedures that govern access to money in SIAI are both different from and far less strict than the procedures that would be used to govern access to code. Even the woefully obsolete "So You Want To Be A Friendly AI Programmer" document contains procedures that are more strict than this for dealing with actual AI code, and my suspicion is that these are likely to have updated only in the direction of more strictness since. Disclaimer: I am neither an affiliate of nor a donor to the SIAI.
This would certainly be a critical consideration if or when the SIAI was actually doing work directly on AGI construction. I don't believe that is a focus in the near future. There is too much to be done before that becomes a possibility. (Not that establishing security anyway is a bad thing.)
What if you believe in openness and transparency - and feel that elaborate attempts to maintain secrecy will cause your partners to believe you are hiding motives or knowledge from them - thereby tarnishing your reputation - and making them trust you less by making yourself appear selfish and unwilling to share? Surely, then the strategies you refer to could easily be highly counter-productive. Basically, if you misguidedly impose secrecy on the organisations involved then the good guys have fewer means of honestly signalling their altruism towards each other - and cooperating with each other - which means that their progress is slower and their relative advantage is diminished. That is surely bad news for overall risk. The "opposite" strategy is much better, IMO. Don't cooperate with secretive non-sharers. They are probably selfish bad guys. Sharing now is the best way to honestly signal that you will share again in the future.

Is any other information publicly available at the moment about the theft? The total amount stolen is large enough that it's noteworthy as part of a consideration of SIAI's financial practices.

Yes, I want to know that steps have been taken to minimize the possibility of future thefts.


I don't know all the details of the financial controls, but I do know that members of the board have been given the ability to monitor bank transactions and accounts online at will to detect unusual activity and have been reviewing transactions at the monthly board meetings. There has been a major turnover of the composition of the board, which now consists of three major recent donors, plus Michael Vassar and Eliezer Yudkowsky.

Also, the SIAI pressed criminal charges rather than a civil suit to cultivate an internal reputation to deter repeats.

See here. Incidentally, that post seems a lot like this one.
At this point an admin should undelete the original SIAI Fundraising discussion post. I can't seem to do it myself. I can update it with a pointer to this post.
Surely if the SIAI is pursuing legal restitution, there are documents with more details which can be posted. For example a copy of a civil complaint. (Normally civil complaints are a matter of public record.) Let's face reality: It's totally embarrassing that an organization dedicated to reducing existential risk for humanity was not able to protect itself from an extremely common and mundane risk faced by organizations day after day. Of greater concern is that the SIAI may be falling into the common institutional trap of suppressing information simply because it is embarrassing.

civil complaint

It's a criminal case.

Ok, in that case a copy of the police report. As well as the name of the contractor and a copy of his or her contract. How could producing this information prejudice the SIAI's ability to obtain restitution?

Why is it embarrassing that an organization dedicated to reducing one sort of risk fell prey to a risk of another sort?

For one thing, the two risks are interrelated. Money is the life's blood of any organization. (Indeed, Eliezer regularly emphasizes the importance of funding to SIAI's mission). Thus, a major financial blow like this is a serious impediment to SIAI's ability to do its job. If you seriously believe that SIAI is significantly reducing existential risk to mankind, then this theft represents an appreciable increase in existential risk to mankind.

Second, SIAI's leadership is selling us an approach to thinking which is supposed to be of general application; i.e., it's not just for preventing AI-related disasters, it's also supposed to make you better at achieving more mundane goals in life. I can't say for sure without knowing the details, but if a small not-for-profit has $100k+ stolen from it, that very likely represents a failure of thinking on the part of the organization.

There is an area of thought without which we can't get anywhere, but which is hard to teach: generating good hypotheses. Once we have a hypothesis in hand, we can evaluate it. We can compare it to alternatives and use Bayes' formula to update the probabilities that we assign the rival hypotheses. We can evaluate arguments for and against the hypothesis by, for example, identifying biases and other fallacious reasoning.

But none of this will get us where we want to go unless we happen to have selected the true hypothesis (or one "true enough" in a restricted domain, as Newton's law of gravity is true enough in a restricted domain). If all you have to evaluate is a bunch of false hypotheses, then Bayesian updating is just going to move you from one false hypothesis to another. You need to have the true hypothesis among your candidate hypotheses if Bayesian updating is ever going to take you to it. Of course, this is a tall order given that, by assumption, you don't know ahead of time what the truth is. However, the record of discoveries so far suggests that the human race doesn't completely suck at it.

Generating the true hypothesis (or indeed any hypothesis) involves, I am inclined to say, an act of imagination. For this reason, "use your imagination" is seriously good advice. People who are blindsided by reality sometimes say something like: they never imagined that something like this could happen. I've been blindsided by reality precisely because I never considered certain possibilities, which consequently took me by surprise.
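The point about needing the true hypothesis in your candidate set can be illustrated with a toy simulation (not from the thread; all names and numbers here are made up for illustration). A coin has true bias 0.7; if 0.7 isn't among the candidate hypotheses, updating just piles all the probability mass onto the least-wrong candidate, which can look deceptively like convergence on the truth:

```python
import random

def posterior(hypotheses, data):
    """Bayesian update over a finite hypothesis set.

    hypotheses: dict mapping name -> (prior, likelihood function p(x | h))
    data: observed samples
    Returns the posterior as a dict mapping name -> probability.
    """
    post = {h: prior for h, (prior, _) in hypotheses.items()}
    for x in data:
        for h, (_, like) in hypotheses.items():
            post[h] *= like(x)
        total = sum(post.values())              # renormalize each step
        post = {h: p / total for h, p in post.items()}
    return post

def coin(bias):
    """Likelihood function for a biased coin (1 = heads)."""
    return lambda x: bias if x == 1 else 1 - bias

# Flip a coin with true bias 0.7.
random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(500)]

# Candidate set WITHOUT the true hypothesis: mass concentrates on p=0.5,
# the least-false option -- updating cannot invent the truth.
without_truth = {"p=0.3": (0.5, coin(0.3)), "p=0.5": (0.5, coin(0.5))}

# Candidate set WITH the true hypothesis: mass concentrates on p=0.7.
with_truth = {"p=0.3": (1/3, coin(0.3)),
              "p=0.5": (1/3, coin(0.5)),
              "p=0.7": (1/3, coin(0.7))}

print(posterior(without_truth, data))
print(posterior(with_truth, data))
```

In both runs the posterior ends up sharply concentrated on a single hypothesis; only in the second case is that hypothesis actually true.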

I agree. It reminds me of a fictional dialogue from a movie about the Apollo 1 disaster:

Clinton Anderson: [at the senate inquiry following the Apollo 1 fire] Colonel, what caused the fire? I’m not talking about wires and oxygen. It seems that some people think that NASA pressured North American to meet unrealistic and arbitrary deadlines and that in turn North American allowed safety to be compromised.

Frank Borman: I won’t deny there’s been pressure to meet deadlines, but safety has never been intentionally compromised.

Clinton Anderson: Then what caused the fire?

Frank Borman: A failure of imagination. We’ve always known there was the possibility of fire in a spacecraft. But the fear was that it would happen in space, when you’re 180 miles from terra firma and the nearest fire station. That was the worry. No one ever imagined it could happen on the ground. If anyone had thought of it, the test would’ve been classified as hazardous. But it wasn’t. We just didn’t think of it. Now whose fault is that? Well, it’s North American’s fault. It’s NASA’s fault. It’s the fault of every person who ever worked on Apollo. It’s my fault. I didn’t think the test was hazardous. No one did. I wish to God we had.


Does anyone know if the finances of the Cryonics Institute or Alcor have been similarly dissected and analyzed? That kind of paper could literally be the difference between life and death for many of us.

I'm guessing no. As a CI member, I agree it would be useful.

I would be willing to do this work, but I need some "me" time first. The SIAI post took a bunch of spare time and I'm behind on my guitar practice. So let me relax a bit and then I'll see what I can find. I'm a member of Alcor and John is a member of CI and we've already noted some differences so maybe we can split up that work.

I might be willing to do this, but I am somewhat reluctant because I feel like it might be emotionally taxing. I would however be very enthusiastic about sponsoring someone's else's work and willing to invest a substantial amount. I'm not sure how to go about arranging that, though.

Seconded about investment. That could reduce the collective action problem. One possible implementation here.

If there were a 100-page post written about choosing between Alcor and CI, I'd read it. I plan to be hustling people to sign up for cryonics until I'm a glass statue myself, so the more up-to-date information and transparency, the better.

Feel free to contribute to this thread.
Thirded about investment. This reminds me of the general question of rationality professionalism: something I've brought up before here.
I don't know of anything. Before I did the Girl Scouts reading, I did once briefly look through the Alcor filings to ascertain how much of a loss they ran at. (About 0.7m a year for 2008.)
I went to Alcor recently to interview the staff at length and in detail for due diligence. I could add info in comments to such a post.

This GiveWell thread includes a transcript of a discussion between GiveWell and SIAI representatives.

Michael Vassar is working on an idea he calls the "Persistent Problems Group" or PPG. The idea is to assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition. This would have obvious benefits for helping people understand what the literature has and hasn't established on important topics; it would also be a demonstration that there is such a thing as "skill at making sense of the world."

I am a little surprised about the existence of the Persistent Problems Group; it doesn't sound like it has a lot to do with SIAI's core mission (mitigating existential risk, as I understand it). I'd be interested in hearing more about that group and the logic behind the project.

Overall the transcript made me less hopeful about SIAI.


'Persistent Problems Group'? What is this, an Iain Banks novel? :)

(On a side-note, that sounds like a horrible idea. 'Yes, let's walk right into those rapidly revolving blades! Surely our rationality will protect us.')

Horrible, perhaps, but at some point necessary, no?

"Michael Vassar's Persistent Problems Group idea does need funding, though it may or may not operate under the SIAI umbrella."

It sounds like they have a similar concern.

Is there any reason to believe that the Persistent Problems Group would do better at making sense of the literature than people who write survey papers? There are lots of survey papers published on various topics in the same journals that publish the original research, so if those are good enough we don't need yet another level of review to try to make sense of things.

Eric Drexler made what sounds to me like a very similar proposal, and something like this is already done by a few groups, unless I'm missing some distinguishing feature. I'd be very interested in seeing what this particular group's conclusions were, as well as which methods they would choose to approach these questions. It does seem a little tangential to the SIAI's stated mission though.

I'm very impressed with the thoughtfulness of the GiveWell interviewer, Holden Karnofsky.


Google has an "AGI team"?

Yup. I think right now they're doing AGI-ish work though, not "let's try and build an AGI right now". http://www.google.com/research/pubs/author37920.html
Update: please see here.
Google has AFAIK more computer power than any other organization in the world, works on natural language recognition, and wants to scan in all the books in the world. Coincidence?
More or less, yes.
Surely not a coincidence! YouTube too. Plus content providers give lots of stuff not on YouTube directly to Google - so they can keep it off the site. That is also not a coincidence...
Both Page and Looks are very interested in AGI.
They are hosting and sponsoring an AGI conference in August 2011:
Yeah, I found that earlier. I was referring to the line in the linked document that says "A friend of the community was hired for Google's AGI team, and another may be soon." That doesn't seem to be referring just to the conference.
Google is a major machine intelligence company. At least one of their existing products aims pretty directly at general-purpose intelligence. They have a range of other intelligence-requiring applications: translate, search, goggles, street view, speech recognition. They have expressed their AI ambitions plainly in the past: ...however, "Google's AGI team" is interesting phrasing. It probably refers to Google Research. Moshe Looks has worked for Google Research since 2007, goes to the AGI conferences - e.g. see here - and was once described as a SIAI "scientific advisor" on their blog - the most probable source of this tale, IMO. Google Research certainly has some interesting publications in machine intelligence and machine learning.
These two excerpts summarize where I disagree with SIAI: So, SIAI plans to develop an AI that will take over the world, keeping their techniques secret, and therefore not getting critiques from the rest of the world. This is WRONG. Horrendously, terrifyingly, irrationally wrong. There are two major risks here. One is the risk of an arbitrarily-built AI, made not with Yudkowskian methodologies, whatever they will be, but with due diligence and precautions taken by the creators to not build something that will kill everybody. The other is the risk of building a "FAI" that works, and then successfully becomes dictator of the universe for the rest of time, and this turns out more poorly than we had hoped. I'm more afraid of the second than of the first. I find it implausible that it is harder to build an AI that doesn't kill or enslave everybody, than to build an AI that does enslave everybody, in a way that wiser beings than us would agree was beneficial. And I find it even more implausible, if the people building the one AI can get advice from everyone else in the world, while the people building the FAI do not.

I think of it this way:

  • Chance SIAI's AI is Unfriendly: 80%
  • Chance anyone else's AI is Unfriendly: >99%
  • Chance SIAI builds their AI first: 10%
  • Chance SIAI builds their AI first while making all their designs public: <1% (no change to other probabilities)
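Taking the commenter's (self-admittedly exaggerated) numbers at face value, the implied expected-value comparison can be sketched in a few lines. This is purely an illustration of the arithmetic behind the bullet list, not an endorsement of the probabilities:

```python
# The numbers from the bullet list above (exaggerated for effect, per the author).
p_unfriendly_siai = 0.80    # chance SIAI's AI is Unfriendly
p_unfriendly_other = 0.99   # chance anyone else's AI is Unfriendly (">99%")
p_siai_first_secret = 0.10  # chance SIAI builds its AI first (designs kept secret)
p_siai_first_public = 0.01  # chance SIAI builds first with public designs ("<1%")

def p_friendly_outcome(p_siai_first):
    """Overall chance of a Friendly outcome: SIAI wins and succeeds,
    or someone else wins and happens to succeed."""
    return (p_siai_first * (1 - p_unfriendly_siai)
            + (1 - p_siai_first) * (1 - p_unfriendly_other))

print(p_friendly_outcome(p_siai_first_secret))  # ≈ 0.029
print(p_friendly_outcome(p_siai_first_public))  # ≈ 0.012
```

Under these assumptions the secret-designs strategy roughly doubles the chance of a Friendly outcome, which is the contrast the list is meant to show; with different inputs the conclusion flips, which is exactly what the replies below dispute.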
An AI that is successfully "Friendly" poses an existential risk of a kind that other AIs don't pose. The main risk from an unfriendly AI is that it will kill all humans. That isn't much of a risk; humans are on the way out in any case. Whereas the main risk from a "friendly" AI is that it will successfully impose a single set of values, defined by hairless monkeys, on the entire Universe until the end of time. And, if you are afraid of unfriendly AI because you're afraid it will kill you - why do you think that a "Friendly" AI is less likely to kill you? An "unfriendly" AI is following goals that probably appear random to us. There are arguments that it will inevitably take resources away from humans, but these are just that - arguments. Whereas a "friendly" AI will be designed to try to seize absolute power, and take every possible measure to prevent humans from creating another AI. If your name appears on this website, you're already on its list of people whose continued existence will be risky. (Also, all these numbers seem to be pulled out of thin air.)
I see no reason an AI with any other expansionist value system will not exhibit the exact same behaviour, except towards a different goal. There's nothing so special about human values (except that they're, y'know, good, but that's a different issue).
You're using a different definition of "friendly" than I am. An 80% chance SIAI's AI is Unfriendly already contains all of your "takes over but messes everything up in unpredictable ways" scenarios. The numbers were exaggerated for effect, to show contrast and my thought process. It seems to me that you think the probabilities are reversed.
One definition of the term explains: See the "non-human-harming" bit. Regarding: Yes, one of their PR problems is that they are implicitly threatening their rivals. In the case of Ben Goertzel some of the threats are appearing IRL. Let us hear the tale of how threats and nastiness will be avoided. No plan is not a good plan, in this particular case.
What do you mean by existential risk, then? I thought things that killed all humans were, by definition, existential risks. What, if anything, do you value that you expect to exist in the long term? Pretty compelling arguments, IMO. It's simple -- the vast majority of goals can be achieved more easily if one has more resources, and humans control resources, so an entity that is able to self-improve will tend to seize control of all the resources and therefore take control of those resources from the humans. Do you have a counterargument, or something relevant to the issue that isn't just an argument?
Not much risk. Hunting down irrelevant blog commenters is a greater risk than leaving them be. There isn't much of a window during which any human is a slightest threat and during that window going around killing people is just going to enhance the risk to it.
The window is presumably between now and when the winner is obvious - assuming we make it that far. IMO, there's plenty of scope for paranoia in the interim. Looking at the logic so far, some teams will reason that unless their chosen values get implemented, much of value is likely to be lost. They will then multiply that by a billion years and a billion planets - and conclude that their competitors might really matter. Killing people might indeed backfire - but that still leaves plenty of scope for dirty play.
No. Reread the context. This is the threat from "F"AI, not from designers. The window opens when someone clicks 'run'.
Uh huh. So: world view difference. Corps and orgs will most likely go from 90% human to 90% machine through the well-known and gradual process of automation, gaining power as they go - and the threats from bad organisations are unlikely to be something that will appear suddenly at some point.
If we take those probabilities as a given, they strongly encourage a strategy that increases the chance that the first seed AI is Friendly. jsalvatier already had a suggestion along those lines: A public Friendly design could draw funding, benefit from technical collaboration, and hopefully end up used in whichever seed AI wins. Unfortunately, you'd have to decouple the F and AI parts, which is impossible.
Isn't CEV an attempt to separate F and AI parts?
It's half of the F. Between the CEV and the AGI is the 'goal stability under recursion' part.
It's a good first step.
I don't understand your impossibility comment, then.
I'm talking about publishing a technical design of Friendliness that's conserved under self-improving optimization without also publishing (in math and code) exactly what is meant by self-improving optimization. CEV is a good first step, but a programmatically reusable solution it is not. On doing the impossible:
OK, I understand that much better now. Great point.
I wonder if SIAI could publicly discuss the values part of the AI without discussing the optimization part. The values part seems to me (and from what I can tell, you too) where the most good would be done by public discussion while the optimization part seems to me where the danger lies if the information gets out.
Not honestly. When discussing values publicly you more or less have to spin bullshit. I would expect any public discussion the SIAI engaged in to be downright sickening to read and any interesting parts quickly censored. I'd much prefer no discussion at all - or discussion done by other people outside the influence or direct affiliation with the SIAI. That way the SIAI would not be obliged to distort or cripple the conversation for the sake of PR nor able to even if it wanted to.
CEV doesn't seem to fit this description.
CEV is one of the things which, if actually explored thoroughly, would definitely fit this description. As it is it is at the 'bullshit border'. That is, a point at which you don't yet have to trade off epistemic considerations in favor of signalling to the lowest common denominator. Because it is still credible that the not-superficially-nice parts just haven't been covered yet - rather than being outright lied about.
Do you have evidence for this proposition?
I agree entirely with both of wedifrid's comments above. Just read the CEV document, and ask, "If you were tasked with implementing this, how would you do it?" I tried unsuccessfully many times to elicit details from Eliezer on several points back on Overcoming Bias, until I concluded he did not want to go into those details. One obvious question is, "The expected value calculations that I make from your stated beliefs indicate that your Friendly AI should prefer killing a billion people over taking a 10% chance that one of them is developing an AI; do you agree?" (If the answer is "no", I suspect that is only due to time discounting of utility.)
Surely though if the FAI is in a position to be able to execute that action, it is in a position where it is so far ahead of an AI someone could be developing that it would have little fear of that possibility as a threat to CEV?
It won't be very far ahead of an AI in real time. The idea that the FAI can get far ahead is based on the idea that it can develop very far in a "small" amount of time. Well, so can the new AI - and who's to say it can't develop 10 times as quickly as the FAI? So, how can a one-year-old FAI be certain that there isn't an AI project that was started secretly 6 months ago and is about to overtake it in intelligence?
It is a somewhat complex issue, best understood by following what is (and isn't) said in conversations along the lines of CEV (and sometimes metaethics) when the subject comes up. I believe the last time was a month or two ago in one of lukeprog's posts. Mind you this is a subject that would take a couple of posts to properly explore.
Isn't exploring the consequences of something like CEV pretty boring? Naively, the default scenario conditional on a large amount of background assumptions about relative optimization possible from various simulation scenarios et cetera is that the FAI fooms along possibly metaphysical spatiotemporal dimensions turning everything into acausal economic goodness. Once you get past the 'oh no that means it kills everything I love' part it's basically a dead end. No? Note: the publicly acknowledged default scenario for a lot of smart people is a lot more PC than this. It's probably not default for many people at all. I'm not confident in it.
I don't really understand what this means, so I don't see why the next bit follows. Could you break this down, preferably using simpler terms?
The problem is if one organisation with dubious values gets far ahead of everyone else. That situation is likely to be the result of keeping secrets in this area. Openness seems more likely to create a level playing field where the good guys have an excellent chance of winning. Those promoting secrecy are part of the problem here, IMO. I think we should leave the secret projects to the NSA and IARPA. The history of IT shows many cases where use of closed solutions led to monopolies and problems. I think history shows that closed source solutions are mostly good for those selling them, but bad for the rest of society. IMO, we really don't want machine intelligence to be like that. Many governments realise the significance of open source software these days - e.g. see: The government gets really serious about open source.
It's likely to be the result of organizations with dubious values keeping secrets in this area. The good guys being open doesn't make it better, it makes it worse, by giving the bad guys an asymmetric advantage.
We discussed this very recently. The good guys want to form a large cooperative network with each other, to help ensure they reach the goal first. Sharing is one of the primary ways they have of signalling to each other that they are good guys. Signalling must be expensive to be credible, and this is a nice, relevant, expensive signal. Being secretive - and failing to share - self-identifies yourself as a selfish bad guy - in the eyes of the sharers. It is not an advantage to be recognised by good guys as a probable bad guy. For one thing, it most likely means you get no technical support. A large cooperative good-guy network is a major win in terms of risk - compared to the scenario where everyone is secretive. The bad guys get some shared source code - but that in no way makes up for how much worse their position is overall. To get ahead, the bad guys have to pretend to be good guys. To convince others of this - in the face of the innate human lie-detector abilities - they may even need to convince themselves they are good guys...
You never did address the issue I raised in the linked comment. As far as I can tell, it's a showstopper for open-access development models of AI.
You gave some disadvantages of openness - I responded with a list of advantages of openness. Why you concluded this was not responsive is not clear. Conventional wisdom about open source and security is that it helps - e.g. see Bruce Schneier on the topic. Personally, I think the benefits of openness win out in this case too. That is especially true for the "inductive inference" side of things - which I estimate to be about 80% of the technical problem of machine intelligence. Keeping that secret is just a fantasy. Versions of that are going to be embedded in every library in every mobile computing device on the planet - doing input prediction, compression, and pattern completion. It is core infrastructure. You can't hide things like that. Essentially, you will have to learn to live with the possibility of bad guys using machine intelligence to help themselves. You can't really stop that - so, don't think that you can - and instead look into affecting what you can change - for example, reducing the opportunities for them to win, limiting the resulting damage, etc.
What linked comment?
The first comment here, I believe.
In this case, I'm less afraid of "bad guys" than I am of "good guys" who make mistakes. The bad guys just want to rule the Earth for a little while. The good guys want to define the Universe's utility function.
Looking at the history of accidents with machines, they seem to be mostly automobile accidents. Medical accidents are number two, I think. In both cases, technology that proved dangerous was used deliberately - before the relevant safety features could be added - due to the benefits it gave in the meantime. It seems likely that we will see more of that - in conjunction with the overall trend towards increased safety. My position on this is the opposite of yours. I think there are probably greater individual risks from a machine intelligence working properly for someone else than there are from an accident. Both positions are players, though.
Now I'm confused again. Who do you worry about if not the NSA?
I'm having a hard time parsing what that last clause refers to; what is supposed to be better, enslaving or not enslaving?
Why? The SIAI claims they want to build an AI that asks what wiser beings than us would want (where the definition includes our values right before the AI gets the ability to alter our brains). They say it would look at you just as much as it looks at Eliezer in defining "wise". And we don't actually know it would "enslave everybody". You think it would because you think a superhumanly bright AI that only cares about 'wisdom' so defined would do so, and this seems unwise to you. What do you mean by "wiser" that makes this seem logically coherent? Those considerations obviously ignore the risk of bugs or errors in execution. But to this layman, bugs seem far more likely to kill us or simply break the AI than to hit that sweet spot (sour spot?) which keeps us alive in a way we don't want. Which may or may not address your actual point, but certainly addresses the quote.

Thank you very much for doing this. You've clearly put a lot of effort into making it both thorough and readable.

Formulate methods of validating the SIAI’s execution of goals.

Seconded. Being able to measure the effectiveness of the institute is important both for maintaining the confidence of their donors, and for making progress towards their long-term goals.

There's an issue related to the Singularity Summits that is tangential but worth mentioning: even if one assigns a very low probability to a Singularity-type event occurring, the Summits still do a very good job of getting interesting ideas about technology and its possible impact on the world out there, and of promoting a lot of interdisciplinary thinking that might not occur otherwise. I was also under the impression that the Summits were revenue negative, and even given that, I would have argued that they are productive enough to be a good thing.

This is awesome. Thanks for all your hard work. I hope you will consider updating it in place when the 2010 form becomes available?

Yeah, I'll update it when the 2010 documents become available.

Please add somewhere near the top what the SIAI acronym stands for and a brief mention of what they do. I suggest, "The Singularity Institute for Artificial Intelligence (SIAI) is a non-profit research group working on problems related to existential risk and artificial intelligence, and the co-founding organization behind Less Wrong."

Added to the overview section.

Michael Vassar isn't paying himself enough. $52K/yr is not much in either New York City or San Francisco. Or North Dakota, for that matter.

This seems basically like a sub-problem of "SIAI doesn't have enough money", unless you think there's some other program that's being over-funded.
Coming from CT, which is pretty expensive, programmer friends of mine who moved to SF have advised: "Don't take a job for less than $90k in SF, as it will feel like subsistence."
$52k/yr is in line with Eliezer's salary if it's only covering one person instead of two, and judging from these comments, Eliezer's salary is reasonable.
I'm confused by this in three ways. Unless I'm mistaken, in 2009:

  • Eliezer was not married
  • Michael was married
  • Paying higher salaries to married than to single people is a questionable policy, and probably illegal

SIAI seems to be paying the minimum amount that leaves each worker effective instead of scrambling to reduce expenses or find other sources of income. Presumably, SIAI has a maximum that it judges each worker to be worth, and Eliezer and Michael are both under their maximums. That leaves the question of where these salaries fall in that range.

I believe Michael and Eliezer are both being paid near their minimums because they know SIAI is financially constrained and very much want to see it succeed, and because their salaries seem consistent with at-cost living in the Bay Area.

I'm speculating on limited data, but the most likely explanation for the salary disparity is that Eliezer's minimum is higher, possibly because Michael's household has other sources of income. I don't think marriage factors into the question.

I thought I read it was part-time; though I don't see that now.
He is full time. According to the filings he reports 40 hours of work for the SIAI. (Form 990 2009, Part VII, Section A -- Page 7).
From the fact that I can't talk to Michael on the phone for more than 10 minutes without another call coming in, I infer that he works more than 40 hours/week.
I seem to recall someone saying that both Michael Vassar and Eliezer Yudkowsky basically worked 60-hour weeks.
I assume that is 40 hours of work per week.
Yes. 40 per week.

The reason my salary shows as $95K in 2009 is that Paychex screwed up and paid my first month of salary for 2010 in the 2009 tax year. My actual salary was, I believe, constant or roughly so through 2008-2010.

Thanks, I added a note to the text regarding this.

The older draft contains some misinformation. Much is corrected in the new version. I would prefer people use the new version.


I don't know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don't think I care: it's growing and showing better results on the revenue chart each year.

Not too much at all -- at the 2010 Summit, food/coffee was served at Boudin nearby rather than using the more expensive catering service of the hotel.

Awesome idea for a post! You've clearly done a lot of thorough research, and I appreciate the fact that you're sharing it with everyone here.

Images are hosted on TinyPic.com and may not be visible behind corporate firewalls.

I understand that there is an image hosting service on LW accessible through the article editor. Any particular reason it was not convenient to use? It's generally better to keep the content local, to avoid issues with external services suddenly breaking in a few months or years.

I didn't know about that! I will update the post to use it as soon as I can. Thanks! Most of my work on this post was done by editing the HTML directly instead of using the WYSIWYG editor. EDIT: All of the images are now hosted on lesswrong.

In terms of suggestions for SIAI, I'd like to see SIAI folks write up their thinking on the controversial AI topics that SIAI has taken a stand on, such as this, this, and the likelihood of hard takeoff. When I talk to Eliezer, there's a lot that he seems to take for granted that I haven't seen any published explanation of his thinking for. I get the impression he's had a lot of unproductive AI discussions with people who aren't terribly rational, but AI seems like an important enough topic for him to try to identify and prevent the failure modes that th...

Just FYI I'm available to answer any IRL questions.
But not virtual ones? ;-)
The answer to your question is dozens of pages long and I've done a ton of writing on it already, I just don't want to spread it around anywhere unless it's part of a complete project. But if you talk to me in person I can share some of it and we can talk about it. I am writing a book but will need more funding to complete it.
Is this at all related to the Peter Platzer Popular Book Project?
Also, have you considered giving us a preview of at least some of your ideas in blog post form so we can see arguments and counterarguments hashed out? Updating SIAI's website, at least, wouldn't hurt.
No, I haven't; releasing them prematurely would ruin their potential impact. What on the website did you want updated?

This is an excellent post! Does anyone know of a similar examination of the Future of Humanity Institute, which is led by Nick Bostrom? I just can't evaluate whether FHI or SIAI has greater potential to reduce existential risks. And, maybe even more importantly, does the FHI need donations as badly as the SIAI? Any suggestions?

Why don't you look at the filings yourself? It's really not hard, and it wouldn't take more than an evening to get a lot of insight -- you don't have to do it as thoroughly as Brandon did here. I recently spent an enjoyable* evening reading through and summarizing Edge.org's filings.

* I like hunting down puzzles, and there were a few in the Edge.org filings for minor things like who was donating to them.
David Althaus
I don't know where I can find them. I've already used guidestar but apparently guidestar only specializes in US non-profits. FHI is a UK non-profit. Maybe I don't have enough google-fu...
Wikipedia doesn't mention any required disclosures, but mentions the Charity Commission, which links to Intelligent Giving, which merged with New Philanthropy Capital. FHI doesn't show up in the CC, NPC, or GuideStar, so I went a little broader afield and tried a site search on the FHI website: budget site:fhi.ox.ac.uk. This immediately turned up FHI's own documentation of its activities and budgets, a 2007 'annual report'; I used part of its title as a new Google search: future of humanity institute achievements report. These seem to be what you want:

1. 2005-2007
2. 2008-2010

(That wasn't so hard, was it?)
David Althaus
Now I'm feeling kinda stupid but, anyway, thanks a lot! These papers are great!

the SIAI will have to grapple with the high cost of recruiting top tier programmers

Hm, well they're not looking for coders now: Please remember that, at this present time, we are looking for breadth of mathematical experience, not coding skill as such. (Source.) Additionally, I emailed Michael Vassar a description of a rough plan to develop my ability to write code that runs correctly on the first try and he seemed uncertain about the value of implementing it. (Of course, maybe he's uncertain because the other plans of mine I'd shared seemed just as ...

A perhaps better comparison would be how much Eliezer would now make as a programmer, assuming he had worked as one for the duration of SIAI's existence. That is something of a middle ground between SIAI valuing Eliezer less because he has been working at SIAI, and valuing him ridiculously high because he is the only person with experience doing his job at SIAI (the other geniuses having been working in industry).

Maybe, but if SIAI's goal is just to employ Eliezer for as little money as possible then that's not an important consideration.

The real reason SIAI wants to pay Eliezer money beyond what he needs to subsist on is so he can buy luxuries for himself, have financial security, have whatever part of his brain associates high income with high status be satisfied, and feel good about his employment at SIAI. These are good reasons to pay Eliezer more than a living wage. If Eliezer didn't have any utility for money beyond the first $50K, I don't think it would be sensible to pay him more than that. I don't see how hypothetical programming careers come into any of this.

ETA: I guess maybe the hypotheticals could be important if we're trying to encourage young geniuses to follow Eliezer's path instead of getting careers in industry?

It actually is, because both of the bounds I mentioned would come into the bargaining (if that was how the pay was to be determined). What Eliezer could get elsewhere is one consideration; what SIAI could get as an alternative to Eliezer is another. It would only be if Eliezer were an utterly incompetent negotiator that SIAI could force him to accept the minimum of what he would get elsewhere. And we know he is a lot better at game theory than that! Merely as a less-bad reference point than a programming career based on current programming skills. It was your point (as adapted from the OP), not mine. I'm merely pointing out why the comparison is unreasonable as stated.
Sure, Eliezer's value to SIAI is also an important component to negotiations. But the reference point doesn't actually matter for anything, even if quoting reference points is a common dark arts negotiation tactic (tantamount to saying "it's standard" in order to get people to accept something). I'd guess that most of negotiation is dark arts.

It must be weird to be Eliezer Yudkowsky reading this.

Yep. The way it actually works is that I'm on the critical path for our organizational mission, and paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That's all there is to it, and no more.

I have an image of Eliezer queued up in a coffee shop, guiltily eyeing up the assortment of immodestly priced sugary treats. The reptilian parts of his brain have commandeered the more recently evolved parts of his brain into fervently computing the hedonic calculus of an action that other, more foolish types, might misclassify as a sordid instance of discretionary spending. Caught staring into the glaze of a particularly sinful muffin, he now faces a crucial choice. A cognitive bias, thought to have been eradicated from his brain before the SIAI was founded, seizes its moment. "I'll take the triple chocolate muffin thank you" Eliezer blurts out. "Are you sure?" the barista asks. "Well I can't be 100% sure. But the future of intergalactic civilizations may very well depend on it!"

In accordance with the general fact that "calories in - calories out" is complete bullshit, I've had to learn that sweet things are not their caloric content, they are pharmaceutical weight-gain pills with effects far in excess of their stated caloric content. So no, I wouldn't be able to eat a triple chocolate muffin, or chocolate cake, or a donut, etcetera. But yes, when I still believed the bullshit and thought the cost was just the stated caloric content, I sometimes didn't resist.


Luckily a juicy porterhouse steak is a nice stand-in for a triple chocolate muffin. Unfortunately they don't tend to sell them at coffee shops.

Perhaps I'll end my career as a mathematician to start a paleo coffee shop.

I fully expect that less than 0.1% of mathematicians are working on math anywhere near as important as starting a chain of paleo coffee shops. What are you working on?

Fluid dynamics. Considering jumping over to computational neuroscience. I've put some serious thought into a paleo coffee shop. It's definitely on my list of potential extra-academic endeavors if I end up leaving my ivory tower.
A fast search suggests that there aren't any paleo restaurants, and possibly not even paleo sections on menus, so there might just be a business opportunity.
Is coffee in the paleo diet?
There isn't really a rigorous definition of the diet. One guideline some people use is that you shouldn't eat anything you wouldn't eat raw, which excludes beans. Coffee beans aren't actually beans, though. I wouldn't be surprised if some people consider coffee not paleo, but there are big names in the paleo scene who drink coffee (Kurt Harris, Art de Vany). Really, I would say paleo is more a philosophy for homing in on a diet than a particular diet in and of itself. There are hard lines, like chocolate muffins. I don't think coffee is close to that line, though.
That surprises me. The paleo diet I know includes meat, which you should cook in order to kill parasites.
You're right, the guideline is not too well worded. You should probably replace "what you wouldn't eat raw" with "what would be toxic to eat raw". Meat is edible raw. There's nothing inherently toxic about uncooked meat. Many other foods require cooking to diminish their toxicity (potatoes, grains, legumes). There's definitely concern about parasites in raw meat, but parasites are not an inherent quality of the meat itself. There's actually a whole raw paleo sub-subculture. I wouldn't recommend it personally, and I'm not keen to try it myself, but it's there.
There's also a theory that the development of cooking was responsible for the evolutionary Great Leap Forward.
I think it's likely humans evolved to eat cooked food. The guideline "don't eat anything you wouldn't eat raw" isn't intended to dissuade people from eating cooked food, but rather to serve as a heuristic for foods that were probably less commonly eaten by our ancestors. It's unclear to me how accurate the heuristic is. A big counterexample is tubers: tubers are widely eaten by modern hunter-gatherers and are toxic when uncooked.
Tea might be even if coffee isn't.
Is this based on Taubes, or is there more to support it? I found his demolition of calories-in/calories-out fairly convincing, but wasn't wholly convinced by his carb-demonizing.
In what way do you consider "calories in - calories out" complete bullshit? (My guess as to your answer: knock-on effects w.r.t. a Seth-Roberts-style set point of some kind.)
Probably in the same sense that people mean, under generous interpretations, when they say "The Laffer Curve is bullshit" -- which is to say, it's technically true, but not a relevant insight for this part of the problem space, given more significant factors in success.
Sure. I'm curious about what EY sees as the specific "more significant factors in [why sweet things are obstacles to] success [in excess of their stated calories]".
Oh, okay. Probably should have known I couldn't provide what you were looking for, but I wanted to get in a jab at confused critiques of the Laffer Curve and confused applications of conservation of energy to weight loss. :-)
Very skillful exploitation of the humor potential of this thread of conversation! Bravo!
Thank you. That appears to be an entirely reasonable explanation. (Where 'explanation' is not to signal 'needed to be justified' but rather 'interesting to know'.)
That is rather peculiar reasoning to hear from you. You seem to be acting with a level of self-importance that would only be justified if there will be some future being that will torture trans-Singularity trans-humans for not having done enough to accelerate the onset of the Singularity. And that's just stupid.
If citing BATNAs in negotiations is 'dark arts' then the phrase is even more useless than I thought.
So in the language of BATNA, my point was that if a high-powered software development career was not Eliezer's BATNA, citing it in a negotiation would be a dark arts move, just like putting a sign in a grocery store next to a display of tomato cans that said "limit 12 per customer" would be a dark arts move (exploiting anchoring and adjustment).

My current model for negotiations is that the only two numbers that really matter are the maximum price the buyer will pay and the minimum price the seller will sell at. Figuring out what number actually gets chosen (which will be somewhere in between those two) is where the dark art of negotiating comes in. You could cite your BATNA as evidence that your walkaway point was some high or low value, in an attempt to reveal information about your preferences, but if you cited a BATNA that you didn't actually have, that'd be a dark arts tactic.

I'd define dark arts as "attempting to influence the actions of others in ways other than helping them arrive at more correct beliefs or resolve inconsistent preferences". I'm not entirely certain about the "resolve inconsistent preferences" bit, though -- this is kind of a weird zone in my thinking -- what part of me decides how to resolve inconsistencies in my preferences? Is it my preferences themselves?

I hate to be this deep in this discussion because I don't mean to dis Eliezer, but I'd love to improve my thinking about negotiations if it's wrong.
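That two-number model of negotiation can be sketched in a few lines of Python. (The function name and the example figures are my own illustration, not anything from this thread; it's a minimal sketch of the "only the buyer's maximum and the seller's minimum matter" model.)

```python
def negotiation_range(buyer_max, seller_min):
    """Return the zone of possible agreement (ZOPA), or None if no deal.

    In this simple model, only two numbers matter: the most the buyer
    will pay and the least the seller will accept. Any price inside the
    overlap is a possible outcome; where it actually lands within that
    range is where the "dark art" of negotiating comes in.
    """
    if buyer_max < seller_min:
        return None  # bounds don't overlap, so no deal is possible
    return (seller_min, buyer_max)

# Hypothetical example: buyer will pay up to 120, seller needs at least 95.
print(negotiation_range(120, 95))  # -> (95, 120)
print(negotiation_range(80, 95))   # -> None
```

On this model, quoting a BATNA is just a way of making a claim about where your own bound sits; quoting one you don't actually have shifts the other party's estimate of the range without corresponding evidence, which is why the comment above classes it as a dark arts tactic.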
I'd never heard that phrase before. Thanks for introducing it to me. EDIT: To be clearer, I mean BATNA, not "dark arts."
He can easily make that working for a hedge fund or a bank; you don't need great programming skills if you know some math. Speaking from experience.
Even without a degree? That's really interesting. Is it all of the hedge funds or just a limited subset? I guess Eliezer could probably leverage his reputation for six-figure employment.
Probably without a degree, if you have either some connections or chutzpah. I can't say for all of them; some might have strictly stupid policies.

Formulate methods of validating the SIAI’s execution of goals. It appears that the Summit is an example of efficient execution of the reducing existential risk goal by legitimizing the existential risk and AGI problem space and by building networks among interested individuals. How will donors verify the value of SIAI core research work in coming years?

This is the key to assessing organizational effectiveness. There are various outputs we can measure: The growth of the LW community and its activities are surely important ones. We might also want to have...

David Althaus
IMO a survey of LessWrong members would be extremely helpful! Presumably many of us ponder the best way to reduce existential risks given our own abilities and interests. (Not everyone is smart enough to work on FAI... sigh.) If I only knew the experiences of some LessWrong members in various occupational areas or academic fields, and their pros and cons, I might spot my optimal career path, which would further my efficiency, wealth, happiness and influence, and hence maximize my contribution to save the world! I bet many of you feel the same way! Well, maybe off-topic, but I think better coordination among ourselves would be awesome.
Yes, I'm happy to work on this and make a google form. The questions would be very basic demographic stuff like age, gender, occupation, highest level of education attained, fields of study, country of residence... The next step would be getting a high participation.
David Althaus
This would be great! Apparently 2 years ago Yvain also conducted a survey.
Hmm. It might be useful to repeat Yvain's survey to see if there's been any change, while adding the edu/work stuff in.
Wasn't that largely because they wanted to do research on Indian charities?
Yes; I recently subscribed to GiveWell's blog. They've published several articles on what they learned by moving to India (1 2 3 4). It sounds like they learned a lot and would have been justified in the move even if it hadn't saved any money.

Great work!

The money is isn’t missing, though.

I assume the "is" is a typo.

Also: thank you for this post. My confidence in the SIAI has been bolstered.

Not a typo, a quantum superposition.
Typo fixed.

Given that Eliezer earns more than me (or at least did while the $AUD wasn't as strong), I am a little curious as to how much he donates to charity. I mean, if he is going to call on others to donate...

It's a little trite to say it but there is an underlying topic there of some interest. A balance between conflicting signalling incentives as well as the real practical question of how actual caring works in practice at the extreme end.


I am a little curious as to how much he donates to charity. I mean, if he is going to call on others to donate...

His call takes the form "work where you have a comparative advantage, and donate the money where it will do the most expected good." In his case, his comparative advantage lines up exactly with his expectation of maximum good, so the only rational way for him to give money to charity is to reduce his salary until further reductions would reduce his efficacy at saving the world.

Which is what he's said he does.

Obviously. The interesting question is regarding where the line is where keeping money for yourself is to be preferred to redirecting those resources to world saving. This is a question that anyone who is altruistically minded (or who merely wants to live in the long term) must consider. In Eliezer's case it seems like the line is at about $100k. That line will vary somewhat from person to person but the data point is still of interest.
One relevant note: cost of living is quite high in the San Francisco Bay Area where Eliezer lives.
One obvious workaround might be to live elsewhere. (What are the advantages of living there in particular? Better networking opportunities, maybe? That would certainly be worth the cost -- but might it be more efficient to have someone whose full-time job is networking be the one living there, as opposed to someone mostly focused on theoretical research?)
Splitting SIAI to separate geographic locations (polymaths in Cheapistan and some people doing networking in the Bay Area) would create costs in the sense that communication is more tedious with e-mails than face-to-face. I wouldn't want to split the working community either if I was working there (assuming that the community is good which I believe it is). Your question about what the Bay Area benefits are, is a good one. Are there other benefits besides networking?
Skype is your third alternative. (I disagree with this argument, but agree with conclusion for other reasons.)
It's also


I suppose it's sort of a Disclaimer (you have not owned anything which is now SIAI), but Disclosure seems more accurate still.

Thank you for the summary, I have considered looking for information on SIAI's costs before, so presenting it in a readable way is helpful.

Hm. I'd rather have seen more of the analysis on whether what they do with the money is useful, but this is something.

[This comment is no longer endorsed by its author]

My conclusion is not the same as yours, but this is a very good and helpful overview.

Care to explain how your conclusion is different and why? Thanks :)