[ Question ]

What role should LW play in AI Safety?

by Chris_Leong3 min read4th Oct 20212 comments

27

Community
Personal Blog

Many people on LW consider AI Safety either the most, or one of the most, important issues that humanity has to deal with. Surprisingly, I've seen very little discussion about how the LW community slots in here. I'm sure that the Lightcone team has discussed this extensively, but little of that discussion has made it onto the forum. I hope that they write up more of their thoughts at some point so that the community can engage with them, but since there hasn't been much written on this topic, I'll focus mostly on how I see it.

I think a good place to begin would be to list the different ways that the Less Wrong community contributes or has contributed towards this project. By the LW community, I mean the broader rationalsphere, although I wouldn't include people who have just posted on LW once or twice without reading it or otherwise engaging with the community:

a) By being the community out of which MIRI arose
b) By persuading a significant number of people to pursue AI safety research either within academia or outside of it
c) By donating money to AI Safety organisations
d) By providing a significant number of recruits for EA
e) By providing an online space in which to explore self-development
f) By developing rationality tools and techniques useful for AI safety (incl. CFAR)
g) By improving communication norms and practices
h) By producing rationalist or rationalist-adjacent intellectuals who persuade people that AI Safety is important
i) By providing a location for discussing and sharing AI Safety research
j) By creating real-world communities that provide for the growth and development of participants
k) By providing people a real-world community of people who also believe that AI safety is important
l) By providing a discussion space free from some of the political incentives affecting EA
m) More generally, by approaching the problem of AI safety with a different lens than other concerned communities

Some of these purposes seem to have been better served by the EA community. For example, I expect that the EA community is currently ahead in terms of the following:

a) Building new institutions that focus on AI safety
b) Donating money to AI Safety organisations
c) Recruiting people for AI Safety research

The rationality community may very well be ahead of EA in terms of having produced intellectuals who persuaded people that AI Safety is important, but I would expect EA and the academic community to be more important going forward.

I think that LW should probably focus more on the areas where it has a comparative advantage, taking into account its strengths and weaknesses.

I would list the strengths of the LW community compared to EA as the following:

  • Greater development of and stronger filter for rationality (hopefully we aren't all just wasting our time)
  • Greater intellectual focus and intelligence filter
  • Less subject to political and public relations incentives
  • Stronger concentration of mathematical and programming skills

And I would list our weaknesses as:

  • Less practical focus and operations ability
  • Less co-ordination and unity
  • Less ability to operate in the social landscape
  • Less engagement with academic philosophy

I would list the strengths of the rationality community compared to the academic AI Safety community as the following:

  • Greater development of and stronger filter for rationality
  • Greater focus on actually solving the problem and less temptation to dress up previous research as relevant
  • Less overhead associated with academic publication and faster ability to iterate
  • Less pressure to publish for the sake of publishing
  • Less pressure to maintain respectability

And I would list our weaknesses as:

  • Weaker technical skills
  • Less ability to access funding sources outside the AI Safety/Rationality/EA communities
  • More likely to be attempting to make progress alongside our day jobs

Given this situation, how should LW slot into the AI safety landscape?

(I know that the ontology of the post is a bit weird, as there is overlap between LW/EA/Academia, but despite its limitations, I still feel that this frame is useful.)


2 comments

"By being the community out of which MIRI arose"

I would say the LW community arose out of MIRI.

Thanks for pointing this out.