Thanks to the hard work and cooperation of Singularity Institute staff and volunteers, especially Louie Helm and Luke Muehlhauser (lukeprog), we now have a Strategic Plan, which outlines the near-term goals and vision of the Institute, and concrete actions we can take to fulfill those goals.

http://singinst.org/blog/2011/08/26/singularity-institute-strategic-plan-2011/

We welcome your feedback. You can send any comments to institute@intelligence.org.

The release of this Strategic Plan is part of an overall effort to increase transparency at Singularity Institute.


Thank you so much for doing this. It makes a very big difference.

Some comments:

Strategy #1, Point 2e seems to cover things that should be either in point 3 or point 4. Also, points 3 and 4 seem to bleed into each other.

If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, it should be addressed.

The level 2 plan includes "Offer large financial prizes for solving important problems related to our core mission". I remember cousin_it mentioning that he's had very good success asking for answers in communities like MathOverflow, but that the main cost was in formalizing the problems. It seems intuitive that geeks are not much motivated by cash, but are very much motivated by a delicious open problem (and the status solving it brings). Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

Thank you again for publishing a document so that this discussion can be had.

If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, it should be addressed.

Just throwing it out there: it's the SIAI, not the RIAI.

Right now one could be legitimately confused, given that Eliezer is working on rationality books and some of their more visible programs are rationality training.

This spin-off makes sense: the SIAI's goal is not improving human rationality. The SIAI's goal is to try to make sure that if a Singularity occurs, it is one that doesn't destroy humanity or change us into something completely counter to what we want.

This is not the same thing as improving human rationality. The vast majority of humans will do absolutely nothing connected to AI research. Improving their rationality is a great goal, and probably has a high pay-off, but it is not the goal of the SIAI. When people give money to the SIAI, they expect that money to go towards AI research and related issues, including the summits. Moreover, many people who are favorable to rational thinking don't necessarily see a Singularity-type event as at all likely. Many even in the saner end of the internet (e.g. the atheist and skeptic movements) consider it one more fringe belief, so associating it with careful rational thinking is more likely to bring down LW-style rationality's status than to raise the status of Singularity beliefs.

From my own perspective, as someone who agrees with a lot of the rationality material, considers a fast hard takeoff of AI to be unlikely, but thinks it is likely enough that someone should be paying attention to it, this seems like a good strategy.

If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, it should be addressed.

Just speculation here, but the rationality training stuff seems to have very different scalability properties than the rest of Singinst; in the best case, there could end up being a self-supporting rationality training program in every major city. That would be awesome, but it could also dominate Singinst's attention at the expense of all the other stuff, if it wasn't partitioned off.

Thanks for your comments.

It may be the case that the Singularity Summit is spun off at some point, but the higher priority is to spin off rationality training. Also see jimrandomh's comment. People within SI seem to generally agree that rationality training should be spun off, but we're still working out how best to do that.

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

Yes. I'm working (with others, including Eliezer) on that project right now, and am quite excited about it. That project falls under strategy 1.1.

It appears that all the responses to my comment take me to be recommending that the Summit be spun off. I am not saying anything like that. I am commenting on the document and presenting what I think is a reasonable question in the mind of a reader. So the point is not to convince me that keeping the Summit is a good idea. The point is to fix the document so that this question does not arise. Explaining how the Summit fits into the re-focused mission while the rationality training does not would do the trick.

I'm particularly happy that you are working on formalizing the problems. Does this represent a change (or compromise) in E's stance on doing research in the open?

I'm particularly happy that you are working on formalizing the problems. Does this represent a change (or compromise) in E's stance on doing research in the open?

I don't think it was ever Eliezer's position that all research had to be done in secret. There is a lot of Friendliness research that can be done in the open, and the 'FAI Open Problems' document will outline what that work is.

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is, 'formalizing open problems' seems like by far the toughest part here, and it would thus be nice if we could employ collaborative problem-solving to somehow crack this part of the problem... by formalizing how to formalize various confusing FAI-related subproblems and throwing them onto MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

I especially like the following points:

  • 1.1. Clarify the open problems relevant to our core mission.
  • 1.5. Estimate current AI risk levels.
  • 2.2.b. Make use of LessWrong.com for collaborative problem-solving (in the manner of the earlier LessWrong.com progress on decision theory).
  • 2.3. Spread our message and clarify our arguments with public-facing academic deliverables.

What I would add to the list is to directly and publicly engage people like Holden Karnofsky of GiveWell or John Baez. They seem to have the necessary background knowledge and know the math. If you can convince them, or show that they are wrong, you will have defeated your strongest critics. Other people include Katja Grace and Robin Hanson. All of them are highly educated, have read the Sequences, and disagree with the Singularity Institute.

I admit that you have pretty much defeated Hanson and Baez, as they haven't been able or willing to put forth much substantive criticism regarding the general importance of an organisation like the Singularity Institute. I am unable to judge the arguments made by Grace and Karnofsky, as they largely exceed my current ability to grasp the math involved, but judging by the upvotes on Karnofsky's latest post, and by his position, I suppose it might be a productive exercise to refute his objections.

SI has an internal roadmap of papers it would like to publish to clarify and extend our standard arguments, and these would also address many public objections. At the same time, we don't want to be sidetracked from pursuing our core mission by taking the time to respond to every critic. It's a tough thing to balance.

Having contributed a significant amount for me ($500) during the last matching drive in January, I was not considering donating during this round, especially after reading the disappointing interviews with GiveWell. This document changes that, especially seeing action points for increasing transparency and efficiency, and for outreach to other organizations. I'm very pleased to see SI reacting to the criticisms. I have just donated another $500.

What disappointed you in the GiveWell interviews?

Awesome! I know a lot of people were (are?) wary of donating without a clearer understanding of what the money will do and how SI will ACTUALLY mitigate risk from Unfriendly AI. I didn't voice this opinion personally, but I was curious.

Not to be a jerk, but on the fifth page there is a typo. The fourth part of Strategy 2 says,

"4. Build more relationships the optimal philanthropy, humanist, and critical thinking communities, which share many of our values."

Shouldn't there be a "with" after the word "relationships"?

Not to be a jerk, but

Hey. Friends tell friends about typos.

I'm glad that you've done this! I look forward to seeing the list of open problems you intend to work on.

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

I said "you" because I don't see myself as competent to work on decision theory-type problems.

Time to level-up then, eh? :)

(Just sticking to my plan of trying to encourage people to do this kind of work.)

Or such problems are not Normal Anomaly's comparative advantage, and her time is actually better spent on other things. :P

Yeah, I'm actually leveling toward working in neuroscience.
