The application deadline has been extended by another 2 weeks. You can apply until the 22nd of March: https://minily.org/affine
How much technical knowledge of either existing AI systems or mathematics beyond a couple semesters of college calculus do you expect participants to have at the beginning of the session?
Not much is necessary: there will be a wide array of learning objectives, and some will have maths or AI prerequisites, but not all. Being a strong truth-seeker and fast learner is far more important than domain knowledge.
I really like the focus on superintelligence alignment and theory. Many current intro programs try to be "very empirical", which sometimes turns them into something like corporate upskilling for people who will go on to finetune enterprise LLMs, both in terms of program content and selection of participants. That's not to say that being empirical is bad; it's just that a clear large-scale methodological picture should be delivered as well, and people should also be filtered for seriousness about it.
I clicked on the application link and got a message that said "Sorry, the page you were looking for was not found." Did the application window close early, or is that a glitch?
Hi! I’m considering applying and I'd like to know more about the day-to-day time commitment. Roughly how many hours per day are expected, and how much of that is scheduled vs. flexible “work on your own time”?
Also, is there some personal free time to allocate as desired during the month, and are there any planned group activities that aren’t directly related to the curriculum (social outings, exercise, etc.)?
We are open to participation for a fraction of the duration (e.g., 2 or 3 weeks), or even to fully remote participation for exceptional applicants. On-site participation will require full-focus, full-time engagement with the program.
Roughly how many hours per day are expected, and how much of that is scheduled vs. flexible “work on your own time”?
Weekends will be off/free by default. Monday to Friday: probably around 8 hours per day, with some adjustments for participants' personal capacity. Depending on whether this feels like too little or too much for the participants, we may adjust the default amount of working time up or (more likely) down.
Regarding scheduled vs. work on your own time: we prefer participants to be in work/learn mode during the scheduled work/learn time and to rest during the off time (although intellectually active rest is encouraged).
Also, is there some personal free time to allocate as desired during the month, and are there any planned group activities that aren’t directly related to the curriculum (social outings, exercise, etc.)?
I think what I wrote above about weekends being off by default, and about our openness to participation for a fraction of the period, answers the part of the question regarding personal free time.
There will be planned group activities not directly related to the curriculum. We also encourage the participants to bottom-up organize their own initiatives, whether it's a trip to Prague, a hike, a circling session, or whatever else.
[Deadline extended until 22nd of March!]
Apply to Seminar to Study, Explain, and Try to Solve Superintelligence Alignment
Applications for the AFFINE Superintelligence Alignment Seminar are now open, and we invite you to apply. It will take place in Hostačov, near Prague (Czechia), from 28 April to 28 May.[1]
We are working on a draft of the learning materials in consultation with world experts.
KEY INFO
Application deadline: 22nd of March (extended from the 8th)

Goal
The main purpose of the Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, making them better equipped for work on the mitigation of AI existential risk.
An ASI breaking out of human control and pursuing ends misaligned with human flourishing is a central catastrophic and existential risk model for AI. Despite that, research aimed at finding a solution to the ASI alignment problem is systematically neglected by the broader AI Safety ecosystem, which instead largely spends its resources on things that are only broadly related to it (monitoring, steering, measuring relatively small amounts of optimization, etc.).
It is our goal to fix this inadequacy and provide more people with the prerequisites to tackle the core problems of superintelligence alignment.
The problem at hand is very difficult, so we will focus on learning, distillation, and debate/epistemics practice for the full month, rather than trying to produce novel research.
Strategy
The program will concentrate on learning outcomes: topics or concepts that have a good chance of being relevant to superintelligence alignment. Participants will try to understand a topic by reading the materials, thinking about it, and talking about it with other participants and mentors, and will finally solidify their understanding by teaching the topic to other participants, especially in the form of 1-on-1 peer teaching, but also through lectures or written materials. The alpha version of the list of learning outcomes can be found here.
There will also be lectures, workshops, debates, and discussions.
The Czech countryside setting removes urban distractions while providing space for both focused solo work and spontaneous collaboration. The program rhythm will alternate between intensive technical engagement and explicit recovery time, preventing the burnout that plagues many month-long intensives.
We expect to accept up to 30 mentees, in addition to a number of mentors, on-site as well as remote. We will have two full-time on-site mentors: Ouro (ex-Orthogonal) and Jonas Hallgren (Equilibria Network). Other confirmed mentors include: Abram Demski, Ramana Kumar, Steve Byrnes, Kaj Sotala, Kaarel Hänni, Cole Wyeth, Aram Ebtekar, Elliot Thornley, Linda Linsefors, and Paul ‘Lorxus’ Rappoport.
If funding allows, we will extend the Seminar to a full-year fellowship for the ~10 most promising candidates.
Crucially, the selection for continuation into the year-long fellowship will happen because of collaborative excellence, not despite it. We’re looking for participants who help others learn, who integrate across disciplines, and who build rather than hoard knowledge. The goal extends beyond producing ten individual researchers to creating a cohesive network that continues collaborating after the month ends, whether at CEEALAR or elsewhere.
Active encouragement of collaboration will help us sustain the collaborative spirit of the environment in the face of potential adversariality arising from competition for the extended fellowship. To encourage ambitious approaches that are not guaranteed to work, we do not expect novel and promising research outputs within the one-year time frame of the program; such outputs would, however, be a very welcome surprise.
We are particularly — but by no means exclusively — interested in people who have not yet had a chance to engage in depth with the AI alignment problem and AI existential risk.
You can find more information in the Google Doc and in the Manifund post.
Interested?
If you are interested, please fill out this form by the 22nd of March (deadline extended from the 8th) so that we can schedule an interview with you. If any questions arise, send them by replying to this message or put them in the form. The sooner we receive your application, the greater the chance of an early response and acceptance.

We have some budget to cover travel costs for those who need it the most. Accommodation and daily catering with high-quality food (including vegan and vegetarian options) will be provided. The seminar is free of charge.
Finally, if you know someone you think would be a good candidate for a participant (or a mentor), let us know: send their contact info with a short explanation of why they would be a good fit.
(Credit: @Spiarrow)
We are open to participation for a fraction of the duration (e.g., 2 or 3 weeks), or even to fully remote participation for exceptional applicants. On-site participation will require full-focus, full-time engagement with the program, so it will not be possible to reconcile it with a full- or part-time job.