Start: Friday, November 29, 10am
End: Sunday, December 1, 7pm
Location: EA Hotel, 36 York Street, Blackpool

AIXSU is an unconference on AI and existential risk strategy. As an unconference, the event is created by its participants: the schedule starts empty, and you, the participants, fill it with talks, discussions, and more.

AIXSU is inspired by TAISU, a successful AI Safety unconference held at the EA Hotel in August. The AI and existential risk strategy space seems to be in need of more events, and AIXSU hopes to close this gap a bit. The unconference will be three days long.

To enable high-level discussion during the unconference, we require that all participants have some prior involvement with AI or existential risk strategy. AI and existential risk strategy concerns the broad spectrum of problems we need to solve in order for humanity to handle the technological transitions ahead of us. Topics of interest include, but are not limited to: macrostrategy, technological forecasting, technological scenarios, AI safety strategy, AI governance, AI policy, AI ethics, cooperative principles and institutions, and foundational philosophy on the future of humanity. Here is an incomplete list of sufficient criteria:

  • You have participated in one of the following: Strategy, ideas, and life paths for reducing existential risks; AI Safety Camp; MSFP/AISFP; Human-aligned AI Summer School; or the Learning-by-doing AI Safety workshop, and have an interest in strategic questions.
  • You currently work for, or have previously worked or interned at, an established existential risk reduction organization.
  • You have published papers or sufficiently high-quality blog posts on strategy-related topics.
  • You combine involvement in AI safety or other existential risk work with an interest in strategy. For example, you have worked on AI safety on and off for a few years and also have an active interest in strategy-related questions.
  • You are pursuing a possible future in AI strategy or existential risk strategy and have read relevant texts on the topic.

If you feel uncertain about qualifying, please feel free to reach out and we can have a chat about it.

You can participate in the unconference for as many or as few days as you like. You are also welcome to stay at the EA Hotel before or after the unconference.

Price: Pay what you want (cost price is £10/person/day).
Food: All meals will be provided by the EA Hotel. All food will be vegan.
Lodging: The EA Hotel has reserved two dorm rooms for AIXSU participants. If the dorm rooms fill up, or if you would prefer your own room, there are many nearby hotels you can book; we will provide information on these.

Attendance is on a first-come, first-served basis. Make sure to apply soon if you want to secure your spot.

Apply to attend AIXSU here

Comments:

Earlier this year we hosted an X-Risk Strategy workshop with the Convergence Team in Cologne (https://forum.effectivealtruism.org/posts/cPZ9w2Wxxu2kA9EDg/workshop-strategy-ideas-and-life-paths-for-reducing) with around 20 participants from around Germany.

Participants rated the workshop on a scale from -3 to +3, where 0 indicates an average workshop event; the overall rating was ~+2. We received very positive feedback from participants with a wide range of backgrounds:

"I believe the workshop has helped me to internalize the goal of reducing X-risks. As a result, I anticipate that I will take more concrete steps towards a career in this area than I otherwise would have."
"I really got a motivational boost, especially thanks to the conversation I had with Justin in the evening.
It has become stronger (just meeting like minded people had a lot of influence) but also the possibility of negative impact is now more prominent and I will take it into account more."
"I am still of the opinion that X-Risk is one of the most important causes to tackle, possibly even stronger now. I enjoyed being among other Effective Altruists and I feel as if I have a better impression of the community now."
"I am now more motivated to self-study and maybe try and build something before continuing my university studies."
...

Negative feedback included 'no major insights' or 'too much focus on helping non-math people understand models', which is somewhat to be expected in a diverse crowd.

As an organizer of this event I'm likely biased, but this event was:

  • helpful and valuable to novices (I cannot say that we had real experts around), and it educated them about the risks and potential downsides of this topic
  • considered valuable by people with a strong interest in AI safety. I also got much positive feedback from people in other EA communities for hosting such an event.

All in all, I would be surprised to see major downsides to this event, and I'm pretty confident that participants will benefit overall.