I suggest you may be similarly overestimating the difficulty of explaining your strategic ideas/problems to a sufficiently large audience to get useful feedback...

Yes, I would get some useful feedback, but I also predict a negative effect: When people don't have enough background knowledge to make what I say sound reasonable to them, I'll get penalized for sounding crazy in the same way that I'm penalized when I try to explain AGI to an intuitive Cartesian dualist.

By penalized, I mean something like the effect that Scott Adams (author of Dilbert) encountered while blogging:

I hoped that people who loved the blog would spill over to people who read Dilbert, and make my flagship product stronger. Instead, I found that if I wrote nine highly popular posts, and one that a reader disagreed with, the reaction was inevitably “I can never read Dilbert again because of what you wrote in that one post.” Every blog post reduced my income, even if 90% of the readers loved it. And a startling number of readers couldn’t tell when I was serious or kidding, so most of the negative reactions were based on misperceptions.

Anyway, you also wrote:

The decision theory discussions on LW generated significant progress, but perhaps more importantly created a pool of people with strong interest in the topic (some of whom ended up becoming your research associates). Don't you think the same thing could happen with Singularity strategies?

If so, then not for the same reasons. I think people got interested in decision theory because they could see results. But it's hard to feel you've gotten a result in something like strategy, where we may never know whether one strategy was counterfactually better than another, or at least won't be confident about that for another five years. Decision theory offers the opportunity for results that most people in the field can agree on.

At FHI they sometimes sit around a whiteboard and discuss weird AI-boxing ideas or weird acquire-relevant-influence ideas, and fe...

Vladimir_Nesov replied:

The "results" in decision theory we've got so far are so tenuous that I believe their role is primarily to somewhat clarify the problem statement for what remains to be done (a big step compared to the complete confusion of the past, but not quite clear, or clearly motivated, math). The ratchet of science hasn't clicked yet, even if the rational evidence is significant, which is the same problem you voice for strategy discussion.

Q&A with new Executive Director of Singularity Institute

by lukeprog · 1 min read · 7th Nov 2011 · 182 comments

Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After more than 100 hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) questions submitted by the Less Wrong community, just as Eliezer did two years ago.

 

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask it in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.

 

I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.
