We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding. 

In a nutshell:

  1. Last month, we put out AI Safety Camp’s funding case
    A private donor then decided to donate €5K. 
     
  2. Five more donors offered $7K on Manifund
    For that $7K to not be wiped out and returned, another $21K in funding is needed. At that level, we may be able to run a minimal version of AI Safety Camp next year, where we get research leads started in the first 2.5 months, and leave the rest to them.
     
  3. The current edition is off to a productive start! 
    A total of 130 participants joined, spread over 26 projects. The projects are diverse – from agent foundations, to mechanistic interpretability, to copyright litigation.
     
  4. Our personal runways are running out. 
    If we do not get the funding, we have to move on. It’s hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
     
  5. We commissioned Arb Research to do an impact assessment
One preliminary result is that AISC creates one new AI safety researcher per roughly $12K-$30K of funding. 

How can you support us:

  • Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
  • Donate. Make a donation through Manifund to help us reach the $28K threshold.
    Reach out to remmelt@aisafety.camp for other donation options.


Copying from EAF

TL;DR: At least in my experience, AISC was pretty positive for most participants I know and it's incredibly cheap. It also serves a clear niche that other programs are not filling and it feels reasonable to me to continue the program.

I've been a participant in the 2021/22 edition. Some thoughts that might make it easier to decide for funders/donors.
1. Impact-per-dollar is probably pretty good for the AISC. It's incredibly cheap compared to most other AI field-building efforts and scalable.
2. I learned a bunch during AISC and I did enjoy it. It influenced my decision to go deeper into AI safety. It was less impactful than e.g. MATS for me but MATS is a full-time in-person program, so that's not surprising.
3. AISC fills a couple of important niches in the AI safety ecosystem, in my opinion. It's online and part-time, which makes it much easier for many people to join, and it implies a much lower commitment, which is good for people who want to find out whether they're a good fit for AIS. It's also much cheaper than flying everyone to the Bay or London. This also makes it more scalable, because the only bottleneck is mentoring capacity, without physical constraints.
4. I think AISC is especially good for people who want to test their fit but who are not super experienced yet. This seems like an important function. MATS and ARENA, for example, feel like they target people a bit deeper into the funnel with more experience who are already more certain that they are a good fit. 
5. Overall, I think AISC is less impactful than e.g. MATS even without normalizing for participants. Nevertheless, AISC is probably ~50x cheaper than MATS. So when taking cost into account, it feels clearly impactful enough to continue the project. I think the resulting projects are lower quality, but the people are also more junior, so it feels more like an early educational program than e.g. MATS. 
6. I have a hard time seeing how the program could be net negative unless something drastically changed since my cohort. In the worst case, people realize that they don't like one particular type of AI safety research. But since you chat with others who are curious about AIS regularly, it will be much easier to start something that might be more meaningful. Also, this can happen in any field-building program, not just AISC.  
7. Caveat: I have done no additional research on this. Maybe others know details that I'm unaware of. See this as my personal opinion and not a detailed research analysis. 

Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against? I know that AISC used to be good by talking to various colleagues, but I have no particular reason to believe in its current quality.

  • MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
    • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.
  • Why does the founder, Remmelt Ellen, keep linkposting writing by Forrest Landry which I'm 90% sure is obvious crankery? It's not just my opinion; Paul Christiano said "the entire scientific community would probably consider this writing to be crankery", one post was so obviously flawed it gets -46 karma, and generally the community response has been extremely negative. Some AISC work is directly about the content in question. This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding. [Edit: To clarify, this is not meant to be a character attack. I am concerned that Remmelt does not have the skill of distinguishing crankery from good research, even if he has substantially contributed to AISC's success in the past.]
  • All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now. (though see this reply from Linda on why most of AISC's impact is from upskilling)
  • The impact assessment was commissioned by AISC, not independent. They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don't have time to do this.

MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.

  • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.

AISC isn't trying to do what MATS does. Anecdotal, but for me, MATS could not have replaced AISC (spring 2022 iteration). It's also, as I understand it, trying to have a structure that works without established mentors, since that's one of the large bottlenecks constraining the training pipeline.

Also, did most of the past camps ever have lots of established mentors? I thought it was just the one in 2022 that had a lot? So whatever factors made all the past AISCs work and have participants sing their praises could just still be there.

Why does the founder, Remmelt Ellen, keep posting things described as "content-free stream of consciousness", "the entire scientific community would probably consider this writing to be crankery", or so obviously flawed it gets -46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.

He was posting cranky technical stuff during my camp iteration too. The program was still fantastic. So whatever they are doing to make this work seems able to function despite his crankery. With a five year track record, I'm not too worried about this factor.

All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier.

In the first link, at least, there are only eight papers listed in total. With the first camp being in 2018, it doesn't really seem like the rate dropped much. So to the extent you believe your colleagues that the camp used to be good, I don't think the publication record is much evidence that it isn't anymore. Paper production apparently just does not track the effectiveness of the program much, which doesn't surprise me: I don't think the rate of paper production tracks the quality of AIS research orgs much either.

The impact assessment was commissioned by AISC, not independent. They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don't have time to do this.

Agreed on the metric being not great, and that an independently commissioned report would be better evidence (though who would have commissioned it?). But ultimately, most of what this report is apparently doing is asking a bunch of AISC alumni what they thought of the camp and what they are up to these days, and then noticing that these alumni often really liked it and have gone on to form a significant fraction of the ecosystem. And I don't think they even caught everyone. IIRC our AISC follow-up LTFF grant wasn't in the spreadsheets until I wrote Remmelt that it wasn't there. 

I am not surprised by this. Like you, my experience is that most of my current colleagues who were part of AISC tell me it was really good. The survey is just asking around and noticing the same. 
 

I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there's a good chance I would never have become an AI notkilleveryoneism researcher. 

It feels like a very large number of people I meet in AIS today got their start in one AISC iteration or another, and many of them seem to sing its praises. I think 4/6 people currently on our interp team were part of one of the camps. I am not aware of any other current training program that seems to me like it would realistically replace AISC's role, though I admittedly haven't looked into all of them. I haven't paid much attention to the iteration that happened in 2023, but I happen to know a bunch of people who are in the current iteration and think trying to run a training program for them is an obviously good idea. 

I think MATS and co. are still way too tiny to serve all the ecosystem's needs, and under those circumstances, shutting down a training program with an excellent five-year track record seems like an even more terrible idea than usual. On top of that, the research lead structure they've been trying out for this camp and the last one seems to me like it might have some chance of being actually scalable. I haven't spent much time looking at the projects for the current iteration yet, but from very brief surface exposure they didn't seem any worse on average than the ones in my iteration. That impressed and surprised me, because these projects were not proposed by established mentors like the ones in my iteration were. A far larger AISC wouldn't be able to replace what a program like MATS does, but it might be able to do what AISC6 did for me, and do it for far more people than anything structured like MATS realistically ever could. 

On a more meta point, I have honestly not been all that impressed with the average competency of the AIS funding ecosystem. I don't think it not funding a project is particularly strong evidence that the project is a bad idea. 


On a more meta point, I have honestly not been all that impressed with the average competency of the AIS funding ecosystem. I don't think it not funding a project is particularly strong evidence that the project is a bad idea. 

I made a different call on AISC, but also think this is right. There aren't a lot of players in the funding ecosystem, especially post-FTX there isn't a lot of non-OpenPhil money around, and I generally only weakly update on people succeeding to get funding or failing to get funding.

Thanks, this is pretty reassuring. Mostly due to the nonpersonal details about how AISC fits in the pipeline, but also because your work at Apollo is proprietary and so it's not a bad sign it wasn't published.

I originally found this comment helpful, but have now found other comments pushing back against it to be more helpful. Upon reflection, I don't think the comparison to MATS is very useful (a healthy field will have a bunch of intro programs), the criticism of Remmelt is less important given that Linda is responsible for most of the projects, the independence of the impact assessment is not crucial, and the lack of papers is relatively unsurprising given that it's targeting earlier-stage researchers/serving as a more introductory funnel than MATS.

I thought some about the AI Safety camp for the LTFF. I mostly evaluated the research leads they listed and the resulting teams directly, for the upcoming program (which was I think the virtual one in 2023). 

I felt unexcited about almost all the research directions and research leads, and the camp seemed to be aspiring to focus more on the research-lead structure than past camps had, which increased the weight I assigned to my evaluation of those research directions. For a while I considered funding just the small fraction of research-lead teams I was excited about, but it was only quite a small fraction, so I recommended against funding.

It did seem to me that the quality of research leads was markedly worse by my lights than in past years, so I didn't feel comfortable just doing an outside view on the impact of past camps (as the Arb report seems to do). I feel pretty good about the past LTFF grants to earlier camps, but my expectations for post-2021 camps were substantially worse, looking at the inputs and plans, so my expectation of the camp's value substantially changed.

Good to have more details on your views here.

That’s useful.

Before, we could only personally go on and share with donors the following:

“His guess, he replied, was that he was not currently super interested in most of the projects we found RLs for, and not super interested in the "do not build uncontrollable AI" area.” [or "AI non-safety" stream, as we called it at the time]

That was still better than nothing. And overall, I appreciate the honesty and openness with which you have shared your views over the years.

  • MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.


There is so much wrong here that I don't even know where to start (i.e. I don't know what the core cruxes are), but I'll give it a try. 

AISC is not MATS, because we're not trying to be MATS. 

MATS is trying to find the best people and have them mentored by the best mentors, in the best environment. This is great! I'd recommend MATS to anyone who can get in. However, it's not scalable. After MATS has taken the top talent and mentors, there are still dozens of people who can mentor and would be happy to do so, and hundreds of people who are worth mentoring.

To believe that a MATS-style program is the only program worth running, you have to believe that:

  1. Only the top talent matters.
  2. MATS and similar programs have perfect selection, i.e. no-one worth accepting is ever rejected.

I'm not going to argue about 1. I suspect it's wrong, but I'm not very sure.

However, believing in 1 is not enough. You also need 2, and believing in 2 is kind of insane. I don't know how else to put it. Sorry.

You're absolutely correct that AISC has lower average talent. But because we have a lower bar, we get the talent that MATS and other prestigious programs are missing. 

AISC is this way by design. The idea of AISC is to give as many people as we can the chance to join the AI safety effort: to test the waters, to show the world what they can do, or to get inspiration to do something else. 

And I'm not even addressing the accessibility of a part-time online program. There are people who can't join MATS and similar programs, because they can't take the time to do so, but can join AISC. 

Also, if you believe strongly in MATS's ability to select for talent, then consider that some AISC participants go on to attend MATS later. I think this proves my point: AISC can support people that MATS's selection process doesn't yet recognise.

  • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.

This is again missing the point. The deal AISC offers our research leads is that they provide a project and we help them find people to work with them. So far our research leads have been very happy with this arrangement.

MATS is drawing its mentors from a small pool of well-known people. This means they have to make the most of a very scarce resource. We're not doing that. 

AISC has an open application for people interested in leading a project. This way we get research leads you've never heard of, and who are happy to spend time on AISC in exchange for extra hands on their projects. 

One reason AISC is much more scalable than MATS is that we're drawing from a much larger pool of "mentors".

 

At this point, someone might think: so AISC has inexperienced mentors leading inexperienced participants. How can this possibly go well?

This is not a trivial question. This is a big part of what the current version of AISC is focused on solving. First of all, a research lead is not the same as a mentor. Research leads are welcome to provide mentorship to their participants, but that's not their main role.  

The research lead's role is to suggest a project, formulate a project plan, and then lead that project. This is actually much easier than providing general mentorship. 

A key part of this is the project plan. As part of the application process for research leads, we require them to write down a project plan. When necessary, we help them with this. 

Another key part of how AISC succeeds with less experienced "mentors" is that we require our research leads to take an active part in their projects. This obviously takes up more of their time, but it also makes things work better, and to a large extent makes up for the research leads being less experienced than in other programs. And as mentioned, we get lots of project leads who are happy with this arrangement.



What the participants get is learning by doing, by being part of a project that at least aims to reduce AI risk.

Some of our participants come from AI Safety Fundamentals and other such courses. Others are professionals with various skills and talents, but not yet much involvement in AI safety. We help these people take the step from AI safety student, or AI-safety-concerned professional, to someone who actually does something. 

Going from just thinking and learning, to actively engaging, is a very big step, and a lot of people would not have taken that step, or taken it later, if not for AISC.

I see your concern. 

Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two streams. Each of us is allowed to independently accept projects into our own stream.

Remmelt believes that AGI alignment is impossible, i.e. there is no way to make AGI safe. Exactly why Remmelt believes this is complicated, and something I myself am still trying to understand. However, this is actually not very important for AISC. 

The consequence for AISC is that Remmelt is only interested in projects that aim to stop AI progress. 

I still think that alignment is probably technically possible, but I'm not sure. I also believe that even if alignment is possible, we need more time to solve it. Therefore, I see projects that aim to stop or slow down AI progress as good, as long as there are no overly large adverse side-effects, and I'm happy to have Remmelt and the projects in his stream as part of AISC. Not to mention that Remmelt and I work really well together, despite our different beliefs.  

If you check our website, you'll also notice that most of the projects are in my stream. I've been accepting any project as long as there is a reasonable plan, there is a theory of change under some reasonable and self-consistent assumptions, and the downside risk is not too large. 

I've bounced around a lot in AI safety, trying out different ideas, and started more research projects than I finished, which has given me a wide view of different perspectives. I've updated many times in many directions, which has left me with wide uncertainty as to which perspective is correct. This is reflected in what projects I accept to AISC. I believe in a "let's try everything" approach. 

 

At this point, someone might think: if AISC is not filtering projects beyond "seems worth a try", how does AISC make sure not to waste participants' time on bad projects?

Our participants are adults, and we treat them as such. We do our best to present what AISC is, and what to expect, and then let people decide for themselves if it seems like something worth their time.

We also require research leads to do the same, i.e. the project plan has to provide enough information for potential participants to judge whether this is something they want to join. 

I believe there is a significant chance that the solution to alignment is something no-one has thought of yet. I also believe that the only way to do intellectual exploration is to let people follow their own ideas, and to avoid top-down curation. 

The only thing I filter hard for in my stream is that the research lead actually needs to have a theory of change. They need to have actually thought about AI risk, and about why their plan could make a difference. I have had this conversation with every research lead in my stream. 

We had one person last AISC who said that they regretted joining, because they could have learned more from spending that time on other things. I take that feedback seriously. But on the other hand, I regularly meet alumni who tell me how useful AISC was for them, which convinces me AISC is clearly net positive. 

However, if we were not understaffed (due to being underfunded), we could do more to support the research leads in making better projects.

I also believe that even if alignment is possible, we need more time to solve it.

The “Do Not Build Uncontrollable AI” area is meant for anyone to join who shares this concern.

The purpose of this area is to contribute to restricting corporations from recklessly scaling the training and uses of ML models.

I want the area to be open for contributors who think that:

  1. we’re not on track to solving safe control of AGI; and/or
  2. there are fundamental limits to the controllability of AGI, and unfortunately AGI cannot be kept safe over the long term; and/or
  3. corporations are causing increasing harms in how they scale uses of AI models.

After thinking about this over three years, I now think 1-3 are all true. I would love more people who hold any of these views to collaborate thoughtfully across the board!

  • All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now.
  • [...] They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don't have time to do this.

That list of papers is for direct research output of AISC. Many of our alumni have lots of publications not on that list. 

For example, I looked up Marius Hobbhahn on Google Scholar.

Just looking at the direct project outputs is not a good metric for evaluating AISC, since most of the value comes from the upskilling. Counting the research that AISC alumni have done since AISC is not a bad idea, but as you say, it is a lot more work; I imagine this is partly why Arb chose to do it the way they did. 

I agree that heavy-tailedness in research output is an important consideration. AISC does have some very successful alumni; if we didn't, this would be a major strike against AISC. The thing I'm less certain of is to what extent these people would have succeeded without AISC. This is obviously a difficult thing to evaluate, but still worth trying. 

Mostly we let Arb decide how best to do their evaluation, but I've specifically asked them to interview our most successful alumni, to at least get these people's own estimates of the importance of AISC. The results will be presented in their second report.

Thanks Thomas for asking these questions. 

I think some of these are common concerns about AISC, partly because we have not always been very clear in our communication. This was a good opportunity for us to clarify. 

I'm confused how much I should care whether an impact assessment is commissioned by some organization. The main thing I generally look for is whether the assessment / investigation is independent. The argument is that because AISC is paying for it, that will influence the assessors? 

My guess is it matters a lot, even if people aspire towards independence. I would update if someone has a long track record of clearly neutral-seeming reports for financial compensation, but I think in the absence of such a track record, my prior would be that people are very rarely capable of making strong negative public statements about people who are paying them.

This is a one-off thing, though. We're not likely to continue paying them, regardless of what they report.


I do think that helps, but I don't think it helps that much. People don't pursue super naive CDT-ish decision theories. 

In practice, this shakes out as a feeling of being indebted to whoever pays you, and a pretty strong hesitation to do anything that would upset them, even if they weren't going to pay you more anyway. Also, few games are really only single-iteration: you will likely continue interacting in one way or another, and Arb will interact with other clients, giving this more of an iterated nature. 

This is an incisive description, and I agree.

I agree. I also expect evaluators commissioned to do an evaluation to rarely dare to speak up against the organisation whose folks they chatted with and who gave them money. I wish it were different, but we've got to be realistic here.

This depends on how much you trust the actors involved.

I know that Remmelt and I asked for an honest evaluation and did not try to influence the result. But you don't know this.

Remmelt and I obviously believe in AISC, otherwise we would not keep running these programs. But since AISC has been chronically understaffed (like most non-profit initiatives), we have not had time to do a proper follow-up study. When we asked Arb to do this assessment, it was in large part to test our own beliefs. So far nothing surprising has come out of the investigation, which is reassuring. But if Arb found something bad, I would not want them to hide it.

Here are some other evaluations of AISC (and other things) that were not commissioned by us. I think in both cases they did not even talk to someone from AISC before posting, although for the second link this was only due to miscommunication. 

Cross-posting reply from EA Forum

Glad you raised these concerns!

I suggest people actually dig for evidence themselves as to whether the program is working.

The first four points you raised seem to rely on prestige or social proof. While those can be good indicators of merit, they are also gameable.

Ie.

  • one program can focus on ensuring they are prestigious (to attract time-strapped alignment mentors and picky grantmakers)
  • another program can decide not to (because they’re not willing to sacrifice other aspects they care about).

If there is one thing you can take away from Linda and me, it is that we do not focus on acquiring prestige. Even the name “AI Safety Camp” is not prestigious; it sounds kind of like a bootcamp. I prefer the name because it keeps away potential applicants who are in it for the social admiration or influence.

AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.

You are welcome to ask research leads of the current edition.

Note from the Manifund post:

“Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers.”

All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier… Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers.

We also do not focus on getting participants to submit papers to highly selective journals or ML conferences (selective, though not necessarily for quality of research with regard to preventing AI-induced extinction).

AI Safety Camp is about enabling researchers who are still on the periphery of the community to learn by doing, and to test their fit for roles in which they can help ensure future AI is safe.

So the way to see the papers that were published is this: organisers did not optimise for the publication of papers, and some came out anyway.

Most groundbreaking AI Safety research that people now deem valuable was not originally published in a peer-reviewed journal. I do not think we should aim for prestigious venues now.

I would consider published papers as part of a ‘sanity check’ for evaluating editions after the fact. If the relative number of (weighted) published papers, received grants, and org positions would have gone down for later editions, that would have been concerning. You are welcome to do your own analysis here.

Because there seems to be little direct research…

What do you mean by this claim?

If you mean research outputs, I would suggest not just focussing on peer-reviewed papers but include LessWrong/AF posts as well. Here is an overview of ~50 research outputs from past camps.

Again, AI Safety Camp acts as a training program for people who are often new to the community. The program is not like MATS in that sense.

It is relevant to consider the quality of research thinking coming out of the camp. If you or someone else had the time to look through some of those posts, I’m curious to get your sense.

Why does the founder, Remmelt Ellen, keep posting things described as…

For the record, I’m at best a co-founder. Linda was the first camp’s initiator. Credit to her.

Now on to your point:

If you clicked through Paul’s somewhat hyperbolic comment that “the entire scientific community would probably consider this writing to be crankery” and considered my response, what are your thoughts on whether that response is reasonable? I.e. consider whether the response is relevant, soundly premised, and consistently reasoned.

If you really want social proof, consider that the ex-Pentagon engineer whom Paul was reacting to got $170K in funding from SFF and has now discussed the argument in depth for 6 hours with a long-time research collaborator (Anders Sandberg). If you asked Anders about the post about causality limits described by a commenter as “stream of consciousness”, Anders could explain to you what the author intended to convey.

Perhaps dismissing a new relevant argument out of hand, particularly if it does not match intuitions and motivations common to our community, is not the best move?

Acknowledging here: I should not have shared some of those linkposts because they were not polished enough and did not do a good job at guiding people through the reasoning about fundamental controllability limits and substrate-needs convergence. That ended up causing more friction. My bad. → Edit: more here

The first four points you raised seem to rely on prestige or social proof.

I'm trying to avoid applying my own potentially biased judgement, and it seems pretty necessary to use either my own judgement or some kind of social proof. I admit this has flaws.

But I also think that the prestige of programs like MATS makes the talent quality extremely high (though I may believe Linda on why this is okay), and that Forrest Landry's writing is probably crankery, and that if alignment is impossible, it's likely for a totally different reason.

We also do not focus on getting participants to submit papers to highly selective journals or ML conferences (venues selective on prestige, though not necessarily on quality of research with regard to preventing AI-induced extinction).

I think we just have different attitudes to this. I will note that ML conferences have other benefits, like networking, talking to experienced researchers, and getting a sense of the field (for me, going to ICML and NeurIPS was very helpful). And for domains people already care about, peer review is a basic check that work is "real": novel, well-communicated, and meeting some minimum quality bar. Interpretability is becoming one of those domains.

It is relevant to consider the quality of research thinking coming out of the camp. If you or someone else had the time to look through some of those posts, I’m curious to get your sense.

I unfortunately don't have the time or expertise to do this, because these posts are in many different areas. One I do understand is this post, because it cites mine and I know Jeremy Gillen. The quality and volume of work seem a bit lower than for my post, which took 9 person-weeks and is (according to me) not quite good enough to publish or further pursue, though it may develop into a workshop paper. The soft optimization post took 24 person-weeks (assuming 4 people half-time for 12 weeks) plus some of Jeremy's time. I had no training in probability theory or statistics, although I was somewhat lucky in finding a topic that did not require it.

If you clicked through Paul’s somewhat hyperbolic comment that “the entire scientific community would probably consider this writing to be crankery” and considered my response, what are your thoughts on whether that response is reasonable? I.e. consider whether the response is relevant, soundly premised, and consistently reasoned.

I have no idea because I don't understand it. It reads vaguely like a summary of crankery. Possibly I would need to read Forrest Landry's work, but given that it's also difficult to read and I currently give 90%+ that it's crankery, you must understand why I don't.

The soft optimization post took 24 person-weeks (assuming 4 people half-time for 12 weeks) plus some of Jeremy's time.

Team member here. I think this is a significant overestimate; I'd guess 12-15 person-weeks. If it's relevant, I can ask all former team members how much time they spent; it was around 10h per week for me. Given that we were beginners and spent a lot of time learning about the topic, I feel we did fine and learnt a lot.

Working on this part-time was difficult for me and the fact that people are not working on these things full-time in the camp should be considered when judging research output.

I have no idea because I don't understand it. It reads vaguely like a summary of crankery. Possibly I would need to read Forrest Landry's work, but given that it's also difficult to read...

This is honest.
Maybe it would be good to wait for people who can spend the time to consider the argument to come back on this?

I mentioned that Anders Sandberg has spent 6 hours discussing the argument in depth. Several others are looking into the argument.

What feels concerning is when people rely on surface-level impressions, such as the ones you cited, to make judgements about an argument where the inferential gap is high.

It’s not good for the epistemic health of our community when insiders spread quick confident judgements about work by outside researchers. It can create an epistemic echo chamber.

...and I currently give 90%+ that it's crankery, you must understand why I don't

I do get this, given the sheer number of projects in AI Safety that may seem worth considering.

Having said that, the argument is literally about why AGI could not be sufficiently controlled to stay safe.

  • Even if your quick probability guess is 95% for the reasoning being scientifically unsound, what about the remaining 5%?

  • What is the value of information given the possibility of discovering that alignment efforts will unfortunately not work out? How much would such a discovery change our actions, and the areas of action we would explore and start to understand better?

Historically, changes in scientific paradigms came from unexpected places. Arguments were often written in ways that felt weird and inscrutable to insiders (take a look at Gödel's first incompleteness theorem).

  • How much should a community rely on people's first intuitions on whether some new supposedly paradigm-shifting argument is crankery or not?

  • Should the presentation of a formal argument (technical proof) be judged on the basis of social proof?

The impact assessment was commissioned by AISC, not independent.

This is a valid concern. I have worried about conflicts of interest.

I really wanted the evaluators at Arb to do neutral research, without us organisers getting in the way. Linda and I both emphasised this at an orienting call they invited us to.

From Arb’s side, Gavin deliberately stood back and appointed Sam Holton as the main evaluator, who has no connections with AI Safety Camp. Misha did participate in early editions of the camp though.

All in all, this is reason enough to take the report with a grain of salt. It is worth picking apart the analysis and looking for any unsound premises.

What are the actual costs of running AISC? I participated in it some time ago, and am kinda participating this year again (it's complicated). As far as I can tell, the only things required are some amount of organization, and then maybe a paid Slack workspace. Is this just about salaries for the organisers?

Eli:

The answer seems to be yes. 

On the manifund page it says the following: 

Virtual AISC - Budget version 

  • Software etc: $2K
  • Organiser salaries, 2 ppl, 4 months: $56K
  • Stipends for participants: $0

Total: $58K

In the Budget version, the organisers do the minimum job required to get the program started, but no continuous support to AISC teams during their projects and no time for evaluations and improvement for future versions of the program.

Salaries are calculated based on $7K per person per month.

Based on the minimum threshold of $28k, that would seem to offer about 2 ppl for 2 months. 
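The arithmetic here can be checked directly. A quick sketch, using only the figures quoted above (the $7K/person/month rate and the $2K software line item):

```python
# Sanity-checking the Virtual AISC "budget version" figures quoted above.
RATE = 7_000              # USD per organiser per month
software = 2_000          # "Software etc" line item

salaries = 2 * 4 * RATE   # 2 organisers for 4 months
total = software + salaries
print(salaries, total)    # → 56000 58000

# At the $28K minimum threshold, what remains after software costs
# buys this many organiser-months:
person_months = (28_000 - software) / RATE
print(round(person_months, 1))  # → 3.7, i.e. roughly 2 people for ~2 months
```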

What exactly the minimum amount needed to organise an AISC is, is a bit complicated.

We could do a super budget version for under $58k which is even more streamlined. This would cut into quality, however. But the bigger problem is this (just speaking for myself):

  • If AISC pays enough for me to live frugally on this salary for the rest of the year, then I can come back and organise another one. (And as a bonus, the world also gets whatever else I do during the rest of the year, which will probably also be AI safety related.)
  • If that is not the case, I need to have a different primary income, and then I can't promise I'll be available for AISC.

Exactly what is that threshold? I don't know. It depends on my partner's income, which is also a bit uncertain.

If I'm not available, is it possible to get someone else? Maybe; I'm not sure. My role requires both organising skill and AI safety knowledge. Most people who are qualified are busy. Also, a new person would initially have to put in more hours. Remmelt and I have a lot of experience doing AISC together, which means we can get it done quicker than someone new.

We're also fundraising on our website, aisafety.camp.
I think Remmelt chose the $28k threshold hoping we'll get some money through other channels too. So far we have received ~$5.5K in donations outside Manifund.

If we get to the $28k threshold, and nothing more, we'll try to do something approximately like a next AISC, somehow. But in this case I'll probably quit after that.

Regardless of whether or not it's AI Safety Camp, I think it's important to have at least one intro-level research program, particularly because applications for programs like SERI MATS ask about previous research experience.

I can see merit both in Oliver's views about the importance of nudging people down useful research directions and Linda's views on assuming that participants are adults. Still undecided on who I ultimately end up agreeing with, so would love to hear other people's opinions.

I appreciate the openness of your inquiry here.

I don't really get why this wouldn't get funded. 

I'm not sure either, but here's my current model:
Even though it looks pretty likely that AISC is an improvement on no-AISC, there are very few potential funders:
1) EA-adjacent charitable organizations.
2) People from AIS/rat communities.

Now, how to explain their decisions?
For the former, my guess would be a mix of not having heard of (or received an application from) AISC, and preferring to optimize heavily towards top-rated charities. AISC's work is hard to quantify, as you can tell from the most upvoted comments, and that's a problem when you're looking for projects to invest in, because you need to avoid being criticized for that kind of choice if it turns out AISC is crankery/a waste of funds. The Copenhagen interpretation of ethics applies hard there for an opponent with a grudge against the organization.
For the latter, it depends a lot on individual people, but here are the possibilities that come to mind:
- Not wanting to donate anything but feeling like one has to, which leads to large donations to a few projects once one feels like donating enough to break the status quo bias.
- Being especially mindful of one's finances and donating only to preferred charities, because of a personal attachment (again, not likely to pick AISC a priori) or because they're provably effective.

To answer 2): could you say why you don't donate to AISC? Your motivations are probably very similar to those of other potential donors here.