## LessWrong

If she says that the strategies they explore would even alienate some people associated with LW, let alone SI, then that's really bad.

I disagree. LWers have a range of opinions on AI & the singularity (yes, those opinions are less diverse than the general population's, but I don't see them being sufficiently less diverse for your argument to go through). There are already quite a few LWers who're SI sceptics to a degree. I'm also sure there are LWers who, at the moment, basically agree with SI but would spurn it if it announced a more specific strategy for handling AI/the singularity. I think this would be true for most possible strategies SI could announce. I'd expect the same basic argument to hold for SI (though I'm less sure because I know less about SI).

I think you underestimate the amount of information that a natural language sentence can carry and signal.

Quite possible! But in any case, a sentence can carry lots of information about one thing, but not another. One has to look at the probability of a sentence or claim conditional on a specific thing. As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one, and AnnaSalamon's belief discriminates poorly between those two particular hypotheses.
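To make the point concrete: in odds form, a Bayesian update multiplies the prior odds by the likelihood ratio, so an observation that is about equally likely under both hypotheses barely moves the posterior. The sketch below is purely illustrative; all numbers are invented, and the function name is mine, not anything from the thread.

```python
# Illustrative Bayesian update in odds form: an observation that is about
# equally probable under two hypotheses carries almost no evidence between them.
# All numbers here are invented for illustration.

def posterior_odds(prior_odds, p_obs_given_h1, p_obs_given_h2):
    """Posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_obs_given_h1 / p_obs_given_h2
    return prior_odds * likelihood_ratio

# H1: "SI has a terrible secret strategy"; H2: "SI has an un-terrible one".
# Suppose AS's remark is nearly as likely under either hypothesis:
prior = 0.25                               # made-up prior odds for H1 over H2
after = posterior_odds(prior, 0.60, 0.55)  # likelihood ratio close to 1
print(round(after, 3))                     # barely moved from 0.25
```

With a likelihood ratio of 0.60/0.55 ≈ 1.09, the odds shift from 0.25 to only about 0.27, which is the sense in which the observation "discriminates poorly" between the hypotheses.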

It is abundantly clear that SI is really bad at PR. I assign a high probability to the possibility that she and other members of SI are revealing a lot of what is going on behind the scenes by being careless about their communication.

Plausible, but I doubt it's true for this specific example.

As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one...

Even if I were to accept your estimate, the utilities attached to P(people alienated | terrible strategy) and P(people alienated | un-terrible strategy) would still force you to act as if the first possibility were true.
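The decision-theoretic point here can be sketched separately from the probability question: even when evidence leaves the probabilities nearly unchanged, sufficiently unequal stakes can dominate the decision. The numbers and variable names below are invented for illustration only.

```python
# Illustrative expected-utility asymmetry: a low-probability hypothesis can
# still dominate the decision if the downside under it is large enough.
# All numbers are invented for illustration.

p_terrible = 0.3            # assumed probability the strategy is terrible
cost_if_terrible = -100.0   # large loss if you ignore it and it is terrible
cost_of_caution = -1.0      # small fixed cost of acting cautiously anyway

eu_act_cautiously = cost_of_caution                 # pay the small cost for sure
eu_ignore = p_terrible * cost_if_terrible           # risk the large loss

# Caution's small certain cost beats ignoring's expected loss here.
print(eu_act_cautiously, eu_ignore)
```

Under these made-up numbers the cautious action has expected utility -1 versus -30 for ignoring the risk, so the asymmetry in stakes, not the likelihood ratio, drives the choice.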

Blah blah blah... full stop. We're talking about the communication of primates with other primates. Evolution honed your skills to detect the intentions and possible bullshit in the output of other primates. Use your intuition! I am not sure what you are getting at. If she thinks that there are strategies that should be kept secret for political reasons or whatever, and admits it, that's bad from any possible viewpoint.


Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After more than 100 hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

The Rules

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.

I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.