[Deadline extended until 22nd of March!] Apply to Seminar to Study, Explain, and Try to Solve Superintelligence Alignment. Applications for the AFFINE Superintelligence Alignment Seminar are now open, and we invite you to apply. It will take place in Hostačov, near Prague (Czechia), from 28 April to 28 May.[1] We...
[Context: This post is aimed at all readers[1] who broadly agree that the current race toward superintelligence is bad, that stopping would be good, and that the technical pathways to a solution are too unpromising, and too hard to coordinate on, to justify going ahead.] TL;DR: We address the objections made...
[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)] TL;DR: Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an...
(Work done at Convergence Analysis. Mateusz wrote the post and is responsible for the outline of the argument, many details of which crystallized in conversations with Justin. Thanks to Olga Babeeva for feedback on this post.) 1. Introduction: Clarifying the DSA-AI theses. Over the last decade, AI research and...
As the title says. I'm more interested in "up-to-date" than "comprehensive".
(Work done at Convergence Analysis. The ideas are due to Justin. Mateusz wrote the post. Thanks to Olga Babeeva for feedback on this post.) In this post, we introduce the typology of structure, function, and randomness that builds on the framework introduced in the post Goodhart's Law Causal Diagrams. We...
(Work done at Convergence Analysis. Mateusz wrote the post and is responsible for most of the ideas, with Justin helping to think them through. Thanks to Olga Babeeva for feedback on this post.) 1. Motivation. Suppose the perspective of pausing or significantly slowing down AI progress or solving the...